
IoT & Connectivity Tips

Hi everyone,

Maybe you got my email, but I wanted to post this here as well. I built a CSS generator for a few widgets, 8 at the moment. It runs on a cloud instance accessible by anyone.

Advantages:
- you don't have to style buttons and apply style definitions over and over again
- greater flexibility in styling the widget
- you don't have to write any CSS code

The way this works: you use the configurator to style the widget as you want. Also use a class to define what that widget is, or how it's styled, for example "primary-btn", "secondary-btn", etc. Copy the generated CSS code into the CustomCSS tab in ThingWorx and, on the widget, set the CustomClass specified in the configurator.

As a best practice, I'd recommend placing all your CSS code into the Master mashup. All mashups that use that master will then also get the CustomCSS, so the only thing you have to do to your widgets in the mashups is fill the CustomClass property with the desired generated style. Also, comment your differently styled widgets by separating them with /* My red button */, for example.

The mashups for this won't be released; this will only be offered as a service. As you'll see in the configurator, they are not that pretty, the main goal was functionality.

Here is the link to the configurator: https://pp-18121912279c.portal.ptc.io/Thingworx/Runtime/index.html#master=CSSMaster&mashup=ButtonVariables
User: guest
Password: guest123123

Give it a go and have fun! 🙂

NOTE: I will add more widgets to this in the future and will not take requests for specific widgets; I make these based on usage and styling capabilities.
The AddStreamEntries snippet does not offer too much information, except that it needs an InfoTable as input. It is however based on the InfoTable for the AddStreamEntry service.

To use AddStreamEntries, an InfoTable based on sourceType, values, location, source, timestamp and tags must be used.

In this example, I started with a new Thing based on a Stream template and a DataShape named Cxx-DS with three STRING fields a, b and c (a being the primary key).

This DataShape must be converted into an InfoTable which is used as the values parameter. It's important that the timestamp parameter has distinct values! Otherwise values matching the same timestamp will be overwritten!

We don't really need the sourceType as ThingWorx will automatically determine the type by knowing the source and which kind of Entity Type it is.

I created a new MyStreamThing with a new service, filling the InfoTable and the Stream. The result is the following code which will add 5 rows to the Stream:

// *** SET UP META DATA FOR INFO TABLE ***

// create a new InfoTable based on AddStreamEntries parameters (timestamp, location, source, sourceType, tags, values)
var myInfoTable = { dataShape: { fieldDefinitions : {} }, rows: [] };

myInfoTable.dataShape.fieldDefinitions['timestamp']  = { name: 'timestamp', baseType: 'DATETIME' };
myInfoTable.dataShape.fieldDefinitions['location']   = { name: 'location', baseType: 'LOCATION' };
myInfoTable.dataShape.fieldDefinitions['source']     = { name: 'source', baseType: 'STRING' };
myInfoTable.dataShape.fieldDefinitions['sourceType'] = { name: 'sourceType', baseType: 'STRING' };
myInfoTable.dataShape.fieldDefinitions['tags']       = { name: 'tags', baseType: 'TAGS' };
myInfoTable.dataShape.fieldDefinitions['values']     = { name: 'values', baseType: 'INFOTABLE' };

// *** SET UP ACTUAL VALUES FOR INFO TABLE ***

// create new meta data
var tags = new Array();
var timestamp = new Date();
var location = new Object();
location.latitude = 0;
location.longitude = 0;
location.elevation = 0;
location.units = "WGS84";

// add rows to InfoTable (~5 times)
for (var i = 0; i < 5; i++) {

    // create new values based on Stream DataShape
    var params = {
        infoTableName : "InfoTable",
        dataShapeName : "Cxx-DS"
    };

    var values = Resources["InfoTableFunctions"].CreateInfoTableFromDataShape(params);

    // add something to the values to make them unique
    // create and add a new row based on the Stream DataShape
    // only a single row is allowed here!
    var newValues = new Object();
    newValues.a = "aaa" + i; // STRING - isPrimaryKey = true
    newValues.b = "bbb" + i; // STRING
    newValues.c = "ccc" + i; // STRING

    values.AddRow(newValues);

    // create a new InfoTable row based on meta data & values
    // add 10 ms to each entry to make its timestamp unique,
    // otherwise entries with the same timestamp will be overwritten
    var newEntry = new Object();
    newEntry.timestamp = new Date(Date.now() + (i * 10));
    newEntry.location = location;
    newEntry.source = me.name;
    newEntry.tags = tags;
    newEntry.values = values;

    // add the new row to the InfoTable
    myInfoTable.rows[i] = newEntry;
}

// *** ADD myInfoTable (HOLDING MULTIPLE STREAM ENTRIES) TO THE STREAM ***

// add stream entries in the InfoTable
var params = {
    values: myInfoTable /* INFOTABLE */
};

// no return
Things["MyStreamThing"].AddStreamEntries(params);

To verify the values have been added correctly, call the GetStreamEntriesWithData service on the MyStreamThing.
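If you prefer to check the result from a script rather than from Composer, a minimal sketch like the one below works; it assumes the Thing and service names used in this example, and maxItems is an arbitrary value.

// read back the latest Stream entries and log how many were returned
var entries = Things["MyStreamThing"].GetStreamEntriesWithData({
    maxItems: 10 /* NUMBER - number of latest entries to return */
});

logger.info("MyStreamThing returned " + entries.getRowCount() + " stream entries");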
Are you ready for #WorldIoTDay?
Hello! I have just written a tutorial on how to set up Lua to be run from the command line. As many already know, there is no good way to debug Lua scripts as they are used in package deployment in Software Content Manager, and building such a debugger is a vast and difficult undertaking. As an alternative, running small portions of code on the command line to ensure they work as expected is one way to verify the validity of the Lua syntax prior to attempting a deployment. Here are the steps to set up Lua as a command line tool:

1. Grab the GCC compiler called TDM-GCC.
   - Run the exe file and follow the install instructions.
   - Remember the install directory, as the attached install script will need to be configured in a later step. The default location in the install file is "C:\Program Files\TDM-GCC\".
   - Note: leaving "Add to PATH" selected will also allow you to compile C code on the command line by typing "gcc" (this is not required for this set-up).
2. Download the Lua source code.
   - This comes as a ".tar.gz" file, which can be tricky to extract in Windows. Download the freeware 7-Zip, which can unzip this type of archive.
   - Extract the Lua source code and navigate to the top level directory, which should contain just one folder named like "lua-x.x.x", where the x's refer to the version.
3. Download and extract the attached zip file containing the build file.
   - Copy the "build.cmd" file from it to the top level directory of the Lua source code.
   - Modify the configuration as needed: the version number default is 5.3.4 (parameter is called lua_version), and the GCC install path default is "C:\Program Files\TDM-GCC\bin" (parameter is called lua_build_dir).
4. Double click the "build.cmd" file. A console window will appear with installation details. If it worked successfully, the "lua\" directory will be created in the same folder as the "build.cmd" file.
5. Copy the "lua\" directory to "C:\Program Files\".
6. Open "Computer" > "System Properties" > "Advanced system settings".
   - Click "Environment Variables" > "New...".
   - Call the variable "LUA" and make the value "C:\Program Files\lua\bin".
   - Find the "Path" variable and click "Edit...". At the end of what is already there (do NOT delete anything that is already there), add "%LUA%" (make sure there is a ";" between the previous path and this entry).
   - Click "Ok".
7. Open a new command prompt (it has to be new to load the new path) and type "lua" to see if it works. You can then run an example syntax test from the Lua command line.

I hope this is helpful to people! Let me know if you have any questions!
Design and implement a full application that runs without human interaction in the food and beverage world.

NOTE: Complete the following guides in sequential order. The estimated time to complete this learning path is 3 hours.

1. ThingWorx Solutions in Food Industry (Part 1, Part 2, Part 3)
2. Factory Line Automation (Part 1, Part 2, Part 3)
3. Automated Distribution and Logistics (Part 1, Part 2)
4. Securing Industry Data
Alerts are a special type of event.  Alerts allow you to define rules for firing events.  Like events, you must define a subscription to handle a change in state.  All properties in a Thing Shape, Thing Template, or Thing can have one or more alert conditions defined.   You can even define several of the same type of alert.  When an alert condition is met, ThingWorx throws an event. You can subscribe to the event and define the response to the alert using JavaScript.  Events also fire when a property alert is acknowledged and when it goes out of alert condition.   Alert Types Alerts have conditions which describe when the alert is triggered.  The types of conditions available depend upon the property type.  For example, string alerts may be triggered when the string matches pre-set text.  A number alert may be set to trigger when the value of the number is within a range.   EqualTo: Alert is triggered when the defined Value is reached. Applies to Boolean, DateTime, Infotable (in regard to number of rows), Integer, Long, Location, Number, and String base types. NotEqualTo: Alert is triggered when the defined Value is not reached. Applies to Boolean, DateTime, Infotable (in regard to number of rows), Integer, Long, Location, Number, and String base types. Above: Alert is triggered when the defined Limit is exceeded or met (if the Limit is included).  By default, the Limit is included.  Applies to DateTime, Infotable, Integer, Long, and Number base types. Below: Alert is triggered when the alert value is below the defined Limit or meets it (if the Limit is included).  By default, the Limit is included.  Applies to DateTime, Infotable, Integer, Long, and Number base types. InRange: Alert is triggered when a value is between a defined range.  By default, the minimum value is included, but the maximum can be included as well.  Applies to DateTime, Integer, Long, and Number base types. OutofRange: Alert is triggered when a value is outside a defined range.  By default, the minimum value is included, but the maximum can be included as well.  Applies to DateTime, Integer, Long, and Number base types. DeviationAbove: Alert is triggered when the property value minus the alert Value is greater than the alert Limit ((property value - alert value) > alert Limit).  If the Limit is included, the alert is triggered when the property value minus the alert Value is greater than or equal to the alert Limit ((property value - alert value) >= alert Limit).   By default, the Limit is included.  Applies to DateTime, Integer, Long, Location, and Number base types. DeviationBelow: Alert is triggered when the property value minus the alert Value is less than the alert Limit ((property value - alert value) < alert Limit).  If the Limit is included, the alert is triggered when the property value minus the alert Value is less than or equal to the alert Limit ((property value - alert value) <= alert Limit). By default, the Limit is included.  Applies to DateTime, Integer, Long, Location, and Number base types. Anomaly: Alert is triggered when the property value falls outside of an expected pattern as defined by a predictive model.  Applies to Integer, Long, and Number base types.   Alert types are specific to the data type of the property.  
Properties configured as the following base types can be used for alerts: Boolean, DateTime, InfoTable, Integer, Location, Number, and String.

Creating an Alert

When creating an alert:
- You can set it to be enabled or disabled.
- Alerts must have a ThingWorx-compatible name and can optionally contain a description.
- You must set the limit(s) to determine when the event fires. If Include Limit is selected, the event fires when the Limit Condition is met; not including the Limit causes the event to fire when the Limit Condition is surpassed.
- The priority is a metadata field that enables the addition of a priority. It does not impact the Event/Subscription handling or sequence because the system fires events off asynchronously.

Steps to create or modify an Alert:
1. Select an existing Property or create a new Property for a Thing, Thing Shape, or Thing Template for which to create/update the Alert.
2. Click Manage Alerts.
3. Click the New Alert drop-down and select the appropriate Alert Type. Note: the available fields will vary depending on the data type of the Property.
4. Deselect Enabled if you do not wish to make the Alert enabled at the present time (the Alert is enabled by default).
5. Provide a Name and optional Description for the Alert.
6. Enter a Limit (numeric properties) and select Include Limit? if the value entered in the Limit field should trigger the Alert.
7. Select the appropriate Priority. (The Priority is a metadata field for searching and categorization only. It does not affect the order of processing, CPU, or memory usage.)
8. After defining an Alert, you can click New Alert to add additional alerts of either the same or a different condition. You can also click Add New to add additional alerts of the same condition.
9. When all Alerts have been created, click Update, then click Done.
10. Once all Properties have been updated as needed, click Save.

Once Alerts are defined, they appear on the Properties page (while in Edit mode).

After an Alert is defined, a Subscription to that Alert can be configured to launch the appropriate business logic, such as notifying a user of an Event through email or text message.

Monitoring Alerts

When an Alert condition is met, ThingWorx fires off an Alert. You can create a Subscription to the Alert so that you are automatically notified when an Alert is triggered. Alerts are written to the alert history file and can be viewed through the Alert Summary and Alert History Mashups. The system tracks acknowledged and unacknowledged alerts.

Alerts do not fire redundant events. For example, if a numeric property has a rule defined that generates an alert when the value is greater than 50, and a value = 51 arrives, an alert is generated and an alert event will fire. If another value comes in at 53 before the original alert is acknowledged, another event will not be fired because the current state is still greater than 50.

The Alert History and Alert Summary streams provide functionality to monitor alerts in the system. Alert History is a comprehensive log that records all information written to the alert stream, where the data is stored until manually removed.

The Alert Summary provides the ability to filter by all alerts, unacknowledged alerts, or acknowledged alerts. You can also acknowledge alerts on a selected property or all alerts from a particular source (thing). This information can be retrieved using Scripts as well, so you can create your own Alert Summary and History mashups.

From the ThingWorx header, choose Monitoring > Alert History. All Alerts are listed here.
Click the Alert Summary Click the Unacknowledged tab to view alerts that have not been acknowledged. Choose to acknowledge an alert on a property or on the source. Type a message in the corresponding field. Click Acknowledge.   For each alert, the following displays: Property name. Source thing – lists the thing that contains this property with the alert. Timestamp – indicates when the alert was triggered. Name and type of alert. Duration – details how long the alert has been active. AckBy – indicates if the alert has been acknowledged and, if so, by whom and when. Message – defaults to the condition but is overwritten with the acknowledge message if one exists. Alert description.    The Alert History screen displays all Alerts that were once in an alert condition, but have moved out of that alert condition.  A Data Filter is provided at the top of the mashup to more easily find a particular Source, Property, or Alert.   The Alert History report is a Thingworx Mashup created using standard Thingworx functionality.  This means that any developer has the ability to re-create this report or a modification of this report.       Acknowledging Alerts   An acknowledgement (ack) is an indication that someone has seen the alert and is dealing with it (for example, low helium in an MRI machine and someone is filling it).  Alert History shows when alerts were acknowledged and any comments.   You can acknowledge an alert on a property or on the source. A source acknowledgment acknowledges all alerts on the source Thing for the selected alert in Monitoring > Alert Summary. A property acknowledgment (ack) only acknowledges the alerts on the property for the selected alert in Alert Summary.   For example, you create a Thing with two properties that have alerts set up. You put both properties in their alert states. View Alert Summary and select the Unacknowledged tab. You should see two alerts. Select one, and do a property acknowledgement. The alert you selected moves to the Acknowledged tab and is removed from the Unacknowledged tab. Put both properties in their alert states again, select one of the alerts on the Unacknowledged tab, and this time do a source acknowledgement. In this case, both alerts move to the Acknowledged tab, even though you only selected one of them.    For more information about Alerts, click here. To view a tutorial video on alerts, click here. Refer to this article for best practices affecting alerts.
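Since Subscriptions carry the business logic for Alerts, here is a small sketch of what such a Subscription script might look like. It is only an illustration: the Thing names, property name and mail Thing are invented, the notification assumes a Thing based on the MailServer extension template, and AcknowledgeAlert is the service I believe handles scripted acknowledgements, so verify the names and available variables against your own model and ThingWorx version.

// Example body of a Subscription on MyPumpThing, subscribed to the Alert event of the "Temperature" property.
// Variables such as source, sourceProperty, eventTime and eventData are provided to the script at runtime.
logger.warn("Alert fired on " + source + "." + sourceProperty + " at " + eventTime);

// Notify someone - assumes a mail Thing based on the MailServer extension template (placeholder names)
Things["MyMailThing"].SendMessage({
    from: "noreply@mycompany.com" /* STRING */,
    to: "operator@mycompany.com" /* STRING */,
    subject: "Alert on " + source /* STRING */,
    body: "Property " + sourceProperty + " went into an alert state." /* HTML */
});

// A property acknowledgement can also be scripted instead of using the Alert Summary mashup
Things["MyPumpThing"].AcknowledgeAlert({
    property: "Temperature" /* STRING - property whose alert should be acknowledged */,
    message: "Acknowledged automatically by subscription" /* STRING */
});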
There are some scenarios where you don't necessarily want to connect to your corporate mail server, or a public mail server like Gmail - e.g. when testing a new function that could spam the official mail servers - or the mail server is not yet available. In such a scenario it might be a good idea to use a custom, private mail server to be able to send and receive emails locally on a test or development environment.

In this post I will show how to use hMailServer and set up the ThingWorx mail extension to send emails. This post will concentrate on installing and deploying within a Windows environment, more specifically on a Windows 2012 R2 server virtual machine.

Installing hMailServer

Download and install the .NET Framework 3.5 (includes .NET 2.0 and 3.0). In Windows Server 2012 R2, open the Server Manager and in the Configuration add roles and features. Click through the "Role-based or feature-based installation" steps and install the ".NET Framework 3.5 Features" in case they are not already installed.

Download hMailServer via https://www.hmailserver.com/download. As always: the latest version is more stable, while the beta versions might provide more functionality and additional bug fixes. This post is based on version 5.6.6-B2383. Functionality and how-to clicks might change in other versions.

Note: The Microsoft .NET Framework is required for this installation. In case the .NET installation fails when installed together with hMailServer, it's best to cancel the installation and install the required .NET Framework manually instead of using the automatic download and installation offered by hMailServer. In case of such a failure it's best to play it safe and uninstall the mail server again, install the .NET Framework manually and then re-install hMailServer. (Any left-over directories should be deleted before re-installing.)

For the installation, choose your path and a full installation. Use the built-in database engine, set a password for the administrative user and install.

Configuring hMailServer

The hMailServer Administrator opens automatically after the installation - if not, you will find it in the Start menu. Connect to the default instance on the localhost. The password is the one set up during the installation process.

Add a domain (e.g. mycompany.com) and save it. The domain will specify the domain of the mail addresses, e.g. user@domain (me@mycompany.com).

In the domain, add an account. Specify the address (e.g. noreply) and set a password (e.g. ts). Save the new account.

The default port used for SMTP is 25. For POP3 it's 110. This is configured under Settings > Advanced > TCP/IP ports. Ensure the ports for SMTP and POP3 are not blocked by a firewall in case you run into issues later on.

This setup should *usually* work. However there might be hostname-specific SMTP issues. In case something happens, or to avoid errors in the first place, go to Settings > Protocols > SMTP > Delivery of e-mail and specify the Local host name. This should be the fully qualified hostname of the server (e.g. myserver.this.company.com).

Test hMailServer via telnet

Note: telnet needs to be installed for this test - in case it's not installed, Google can help.

Open a command line window and execute: telnet <yourhostname> 25

This will open a connection to the SMTP port of the hMailServer. Manual commands can be sent to test if the basic send functions are working.
The following structure can be used for testing - it holds manual input and responses. Username and password need to be Base64 encoded. See https://www.base64encode.org/ for Base64 conversions. (Tip: only text, don't add additional spaces or line breaks - otherwise the hash will be quite different!)

Command / Response - Description

220 <HOSTNAME> ESMTP - Connected to host
HELO mycompany.com - Connect with domain as defined in hMailServer
250 Hello. - Connected
AUTH LOGIN - Login as authenticated user
334 VXNlcm5hbWU6 - Base64 for "Username:"
bm9yZXBseUBteWNvbXBhbnkuY29t - Base64 for "noreply@mycompany.com"
334 UGFzc3dvcmQ6 - Base64 for "Password:"
dHM= - Base64 for "ts"
235 authenticated. - Authentication successful
MAIL FROM: noreply@mycompany.com - Sender address
250 OK
RCPT TO: <your real mail address> - To address
250 OK
DATA - Body
354 OK, send.
Subject: sending mail via telnet - Subject line
(blank line to indicate end of subject)
just a simple test! - Content
. - . indicates the end of mail
250 Queued (10.969 seconds) - Mail queued and sent with duration
QUIT - Log off telnet
221 goodbye
Connection to host lost. - Log off confirmed

Configuring ThingWorx

Download and configure the mail extension:

Download the MAIL EXTENSION from the ThingWorx Marketplace: https://marketplace.thingworx.com/Items/mail-extension

In ThingWorx, click Import / Export > Extensions > Import, choose the downloaded .zip file and import it. The Composer should be refreshed to reflect the changes introduced by the extension. The extension creates a new Thing Template: MailServer.

Create a new Thing based on the MailServer Template. In its configuration, adjust the servername and port to match the hMailServer configuration, e.g. localhost and port 25. Change the Mail User Account and Password to the authentication user (e.g. noreply@mycompany.com / ts). Save the configuration to persist the changes.

In any Thing, create a new Service to send mails and notifications. Insert a snippet based on Entities > <yourMailThing> > Send Message. Call the service manually for an initial functional test. It should look similar to this... but parameters need to be adjusted to your environment:

var params = {
    cc: undefined /* STRING */,
    bcc: undefined /* STRING */,
    subject: "sending email via ThingWorx" /* STRING */,
    from: "noreply@mycompany.com" /* STRING */,
    to: "<your real mail address>" /* STRING */,
    body: "just a simple test!" /* HTML */
};

// no return
Things["<yourMailThing>"].SendMessage(params);

Check your mailbox for incoming messages!

What next?

The mail server can also be used to receive emails. So instead of sending mails to your regular mail address and risking a ton of spam (depending on your services and frequency of sending automated emails), you could also configure a local Outlook / Thunderbird / etc. installation and send mails directly to the noreply@mycompany.com address. Those mails can then be downloaded from hMailServer via POP3.

With this, the whole send AND receive mechanism is contained within a single (virtual) machine.
Create An Application Key Guide

Overview

In order for a device to send data to the Platform, it needs to be authenticated. One authentication method is to use an Application Key. Application Keys, or appKeys, are security tokens used for authentication in ThingWorx. They are associated with a given User and have all of the permissions granted to the User with which they are associated. This is one of the most common ways of assigning permission control to a connected device.

NOTE: This guide's content aligns with ThingWorx 9.3. The estimated time to complete this guide is 30 minutes.

Step 1: Learning Path Overview

This guide explains the steps to create a ThingWorx Application Key, and is part of a Learning Path. You can use this guide independently from the full Learning Path. If you want to learn to create a ThingWorx Application Key, this guide will be useful to you. When used as part of the Industrial Plant Learning Path, you should already have installed ThingWorx Kepware Server. We will use the Application Key to send information from ThingWorx Kepware Server into ThingWorx Foundation. Other guides demonstrate Foundation's Mashup Builder to construct a website dashboard that displays information from ThingWorx Kepware Server. We hope you enjoy this Learning Path.

Step 2: Create Application Key

Application Keys are assigned to a specific User for secure access to the platform. Using the Application Key for the default User (Administrator) is not recommended. If administrative access is absolutely necessary, create a User and place the User as a member of the SecurityAdministrators and Administrators User groups. Create the User the Application Key will be assigned to, then:

1. On the Home screen of Composer, click + New.
2. In the dropdown list, click Application Key.
3. Give your application key a name (e.g. MyAppKey).
4. If Project is not already set, click the + in the Project text box and select the PTCDefaultProject.
5. Set the User Name Reference to a User you created.
6. Update the Expiration Date field, otherwise it will default to 1 day.
7. Click Save.

A Key ID has been generated and can be used to make secure connections.

IP Whitelisting for Application Keys

One of the features of an Application Key is the ability to set an IP whitelist. This allows the server to specify that only certain IP addresses should be able to use a given Key ID for access. This is a great way to lock down security on the platform for anything that will maintain a static IP address. For example, connected Web-based business systems may have a static IP from which all calls should be made. Similarly, you can use wildcards to specify a network to narrow the range of IP addresses allowed while still offering some flexibility for devices with dynamic IP addresses. Extremely mobile devices should likely not attempt to implement this, however, as they will often change networks and IP addresses and may lose the ability to connect when the IP whitelist feature is used.
Interact with Application Keys Programmatically

- GetKeyID - Returns the ID of this Application Key
- GetUserName - Gets the username associated with this Application Key
- IsExpired - Returns whether this Application Key is expired
- ResetExpirationDateToDefault - Resets the expiration date of the Application Key to the default time based on configuration in the UserManagement subsystem
- SetClientName - Sets the client name for this Application Key
- SetExpirationDate - Sets the expiration date of this Application Key to a provided date
- SetIPWhiteList - Sets the values for the IP whitelist for this Application Key
- SetUserName - Sets the associated user name for this Application Key

Tip: To learn more about Application Keys, refer to the Help Center.

Step 3: Next Steps

Congratulations! You have successfully created an application key. We hope you found this guide useful.

The next guide in the Connect and Monitor Industrial Plant Equipment learning path is Install ThingWorx Kepware Server.
The next guide in the Azure MXChip Development Kit learning path is Connect Azure IoT Devices.
The next guide in the Medical Device Service learning path is Use the Edge MicroServer (EMS) to Connect to ThingWorx.
The next guide in the Using an Allen-Bradley PLC with ThingWorx learning path is Model an Allen-Bradley PLC.
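The services listed above can also be called from server-side scripts. The sketch below is only an illustration: it assumes the Application Key created in Step 2 is named MyAppKey, and the SetIPWhiteList parameter name and value format are assumptions, so double-check them in Composer before relying on this.

// look up the Application Key entity created earlier (name from Step 2 of this guide)
var appKey = ApplicationKeys["MyAppKey"];

// if the key has already expired, push the expiration back to the configured default
if (appKey.IsExpired()) {
    appKey.ResetExpirationDateToDefault();
}

// restrict usage of this key to specific addresses
// (parameter name "list" and the wildcard syntax are assumptions - verify in Composer)
appKey.SetIPWhiteList({
    list: "10.0.0.15,10.0.0.*" /* STRING */
});

logger.info("Application Key belongs to user: " + appKey.GetUserName());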
In this post, I will use an instance of InfluxDB and Chronograf. See this post for installing both using Docker.

InfluxDB - Time Series Databases

InfluxDB is a time series database. It allows users to work with and organize time series data. The advantage of such a database system is that it comes with built-in functionality to easily aggregate and operate on data based on time intervals. Other types of databases can do this as well - but time series databases are heavily optimized for this kind of data structure, which shows in storage space and performance.

Data is stored in the database with its timestamp, its value and one or more tags, for example:

Time | Temperature | Humidity | Location
2019-01-24T00:00:00 | 23 | 42 | Home
2019-01-24T00:01:00 | 22 | 43 | Home
2019-01-24T00:02:00 | 21 | 44 | Home
2019-01-24T00:03:00 | 23 | 45 | Home
2019-01-24T00:04:00 | 24 | 42 | Home
2019-01-24T00:05:00 | 25 | 43 | Home
2019-01-24T00:06:00 | 23 | 44 | Home

Values can be aggregated by intervals, i.e. "give me the temperature values within the last hour and take the average for 5 minutes". This would result in (60 / 5) = 12 results, each with a value that represents the average temperature within its 5 minute interval.

Example: temperature data averaged by 4 minutes

Time | Temperature
2019-01-24T00:00:00 | (23 + 22 + 21 + 23) / 4 = 22.25
2019-01-24T00:04:00 | (24 + 25 + 23) / 3 = 24

To find out more about InfluxDB see also https://www.influxdata.com/time-series-database/ and https://www.influxdata.com/time-series-platform/

InfluxDB in ThingWorx

The new ThingWorx 8.4 release comes with an option to set up InfluxDB as an additional Persistence Provider. Metadata like Entity Definitions will still be stored in PostgreSQL. Streams, Value Streams and Data Tables however can be stored in InfluxDB.

The InfluxDB Persistence Provider setup is delivered with the PostgreSQL installation package for ThingWorx. Currently ThingWorx does not allow any aggregation of data with its built-in InfluxDB capabilities.

Prepare InfluxDB

InfluxDB will need a user and a database. Connect via Chronograf - the graphical UI to administer InfluxDB - and create a new user via InfluxDB Admin > Users:

Default username = twadmin
Default password = password
Permissions = ALL

Create a new database via InfluxDB Admin > Databases:

Default database name = thingworx

Configure ThingWorx

Create a new Persistence Provider for InfluxDB in ThingWorx - but don't mark it as active yet!

Switch to the Configuration and change the username / password, database and hostname to match your installation.

Save the configuration, switch back to the General tab and mark the InfluxDB Persistence Provider as Active.

Save again and a "successful" message will be shown. If the save action failed, the connection settings are not correct - check for the correct ports and for any typos.

Creating Entities & Testing

Streams, Value Streams and Data Tables can now be created using the new InfluxDB Persistence Provider.

To test with a Value Stream:

- Create a new Thing with some NUMBER properties, e.g. 'a', 'b' and 'c' - ensure they are marked as logged as well. Name = InfluxValueStreamThing
- Create a new ValueStream and change its Persistence Provider to the InfluxDB Persistence Provider created above. Name = InfluxValueStream
- Save both Entities.

Setting values for the properties will now automatically create the entries in InfluxDB - including the Entity name "InfluxValueStreamThing". Running the QueryPropertyHistory service on the Thing will return the results as an InfoTable, and Chronograf will display the same data.

ThingWorx 8.4 will be released end of January 2019. Be sure to check out and test the new Persistence Provider features!
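To drive the test from a script instead of setting values by hand, a small sketch like this can help. It assumes the Thing and property names from the example above ('a', 'b' and 'c' on InfluxValueStreamThing) and uses arbitrary test values.

// write a few test values - each write to a logged property creates a Value Stream entry in InfluxDB
var testThing = Things["InfluxValueStreamThing"];
testThing.a = 1;
testThing.b = 2;
testThing.c = 3;

// read the history back; the result is an InfoTable with one column per logged property
var history = testThing.QueryPropertyHistory({
    maxItems: 100 /* NUMBER */,
    oldestFirst: false /* BOOLEAN */
});

logger.info("QueryPropertyHistory returned " + history.getRowCount() + " rows");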
Today we're going to learn how to use the Axeda Platform SDK v2 APIs to upload a file to the platform and create a software package.  This document is a work in progress, but we're going to show you everything you need to get started.  In my case I am using the very useful and easy to use Postman REST Client app available from the Chrome Store.  I'll be using some terms below (API Object Names) that can be found in the documents listed in the bibliography at the end of this article. Assumptions (Replace these with your own versions): username:  joe, password: password1! platform instance:  axedaplatform.example.com First things first, we need to authenticate to the platform and get a session id (header x_axeda_wss_sessionid). (Note: Postman does not automatically URL encode query parameters - this can be especially important for the password) GET:  https://axedaplatform.example.com/services/v1/rest/Auth/login?principal.username=joe&password=password1! You'll receive a response like this following: <ns1:WSSessionInfo xmlns:ns1="http://type.v1.webservices.sl.axeda.com" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:type="ns1:WSSessionInfo">     <ns1:created>2015-06-02T15:16:49 +0000</ns1:created>     <ns1:expired>false</ns1:expired>     <ns1:sessionId>1a5XXXXX-d9aa-47f2-ac4f-28765ce5dbc5</ns1:sessionId>     <ns1:sessionTimeout>1800</ns1:sessionTimeout> </ns1:WSSessionInfo>                Excellent, now we have a session id! For the rest of the API calls (unless otherwise indicated), all of the following headers are set to the following: x_axeda_wss_sessionid: 1a5XXXXX-d9aa-47f2-ac4f-28765ce5dbc5 Content-Type: application/xml Accept: application/xml The next step is to get our ModelReference: POST:  https://axedaplatform.example.com/services/v2/rest/model/findOne <?xml version="1.0" encoding="UTF-8"?> <ModelCriteria xmlns="http://www.axeda.com/services/v2"> <modelNumber>MyModelName</modelNumber> </ModelCriteria>          Which will return output like: <v2:Model xmlns:v2="http://www.axeda.com/services/v2" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"         id="MyModelName" systemId="6141" label="managed" detail="MyModelName"         restUrl="https://sandbox.axeda.com/services/v2/rest/model/id/6141">     <v2:name>MyModelName</v2:name>     <v2:modelNumber>MyModelName</v2:modelNumber>     <v2:autoRegisterAssets>false</v2:autoRegisterAssets>     <v2:type>MANAGED</v2:type> ... </v2:Model>          The key piece of information we need from that request is the systemId. A little bit about our file (lorem-ipsum.txt): Lorem ipsum dolor sit amet, consectetur adipiscing elit. Integer nec odio. Praesent libero. Sed cursus ante dapibus diam. Sed nisi. Nulla quis sem at nibh elementum imperdiet. Duis sagittis ipsum. Praesent mauris. Fusce nec tellus sed augue semper porta. Mauris massa. Vestibulum lacinia arcu eget nulla. File-size: 307 MD5 Sum: 22b229c7ecc49cfa11255beb06c7f4fe The next step is to create a FileUploadSession and upload our file.  This will create for us the FileInfoReference we need to create our SoftwarePackage. 
PUT:  https://axedaplatform.example.com/services/v2/rest/file/session BODY: <?xml version="1.0"?> <FileUploadSession xmlns='http://www.axeda.com/services/v2'>   <files>     <file xmlns:xsi='http://www.w3.org/2001/XMLSchema-instance' xsi:type='FileInfo'>       <filename>lorem-ipsum.txt</filename>       <md5>22b229c7ecc49cfa11255beb06c7f4fe</md5>       <filesize>307</filesize>       <contentType>application/text</contentType>     </file>   </files>   <expirationDate/>   <status/>   <updatedDate/>   <username/>   <version/> </FileUploadSession>              And our response if all goes OK (HTTP 200) looks like the following: <v2:ExecutionResult xmlns:v2="http://www.axeda.com/services/v2"         xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" successful="true" totalCount="1">     <v2:succeeded>         <v2:success xsi:type="v2:FileUploadSessionSuccessfulOperation">             <v2:ref>16265</v2:ref>             <v2:id>16265</v2:id>             <v2:uploadUri>sftp://DISABLED</v2:uploadUri>             <v2:session systemId="16265" label="16265" detail="16265"                 restUrl="https://sandbox.axeda.com/services/v2/rest/file/id/16265">                 <v2:files>                     <v2:file xsi:type="v2:FileInfo" id="1068731" systemId="1068731"                         label="lorem-ipsum.txt" detail="1068731"> ... </v2:success> </v2:succeeded> </v2:ExecutionResult>         In this case, we just need the value of <v2:file systemId>, which is 1068731. TIME TO UPLOAD THE FILE CONTENTS!!! PUT: https://axedaplatform.example.com/services/v2/rest/file/1068731/content/ Extra Headers: X-File-Name: lorem-ipsum.txt X-File-Size: 307 Content-Type: multipart/form-data; boundary=----WebKitFormBoundary7MA4YWxkTrZu0gW BODY:  There needs to be a mime-part called 'file-content' that contains the contents or lorem-ipsum.txt ----WebKitFormBoundary7MA4YWxkTrZu0gW Content-Disposition: form-data; name="file-content"; filename="cfk-lorem-ipsum.txt" Content-Type: text/plain ----WebKitFormBoundary7MA4YWxkTrZu0gW         Note:  If using Postman, SoapUI or other automated tool, this will be handled automatically for you - do not specify a Content-Type header in this case. And our response, assuming an HTTP 200: <v2:ExecutionResult xmlns:v2="http://www.axeda.com/services/v2" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" successful="true" totalCount="1">     <v2:succeeded>         <v2:success>             <v2:ref>1068731</v2:ref>             <v2:id>1068731</v2:id>         </v2:success>     </v2:succeeded>     <v2:failures /> </v2:ExecutionResult>        This is just confirming our success!  Excellent.  Now we come to the SoftwarePackage.  
We need two key pieces of information, the ModelReference (6141) and the FileInfoReference (1068731): POST: https://axedaplatform.example.com/services/v2/rest/softwarePackage Headers: Our defaults, Content-Type and x_axeda_wss_sessionid BODY: <?xml version="1.0" encoding="UTF-8"?> <SoftwarePackage xmlns="http://www.axeda.com/services/v2" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">   <name>TEST-REST-PACKAGE</name>   <model systemId="6141" />   <version>1.0.0.1</version>   <primaryAgentsOnly>true</primaryAgentsOnly>   <retriesEnabled>true</retriesEnabled>   <instructions>     <instruction xsi:type="DownloadFileInstruction">       <file xsi:type="FileInfo" systemId="1068731"/>       <destinationDirectory>C:\temp</destinationDirectory>       <compressed>false</compressed>       <executable>false</executable>       <pathRelative>false</pathRelative>       <overwriteExistingEnabled>true</overwriteExistingEnabled>     </instruction>   </instructions> </SoftwarePackage>        And our results: <v2:ExecutionResult xmlns:v2="http://www.axeda.com/services/v2"          xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" successful="true" totalCount="1">     <v2:succeeded>         <v2:success>             <v2:ref>TEST-REST-PACKAGE||1.0.0.1</v2:ref>             <v2:id>45863</v2:id>         </v2:success>     </v2:succeeded>     <v2:failures /> </v2:ExecutionResult>        And PROOF! I hope this helps you in your projects, and helps demystify the Axeda Platform REST API a little for you. Regards, -Chris Bibliography (Documents available from Support Portal): Axeda v2 API/Services Developer's Reference Guide_6.8 Axeda Platform Web Services Developer Reference v2REST_6.8 Change History: 2015-09-24 : Change HTTP Methods of session create and content send to PUT from POST
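If you would rather script this flow than click through Postman, the same calls can be made from any HTTP client. Below is a minimal JavaScript sketch (Node.js 18+, built-in fetch) of the first step only: logging in and extracting the session id. The host and credentials are the placeholder values from the top of this post, and the session id is pulled out of the XML with a simple regex rather than a proper XML parser.

// log in to the Axeda platform and extract the x_axeda_wss_sessionid value
async function login(host, username, password) {
    var url = "https://" + host + "/services/v1/rest/Auth/login" +
              "?principal.username=" + encodeURIComponent(username) +
              "&password=" + encodeURIComponent(password); // remember to URL-encode the password
    var response = await fetch(url);
    var xml = await response.text();
    // naive extraction of <ns1:sessionId>...</ns1:sessionId> - use a real XML parser in production
    var match = xml.match(/<ns1:sessionId>([^<]+)<\/ns1:sessionId>/);
    if (!match) throw new Error("Login failed: " + xml);
    return match[1];
}

// example usage with the placeholder values from this post
login("axedaplatform.example.com", "joe", "password1!")
    .then(function (sessionId) {
        console.log("Session id: " + sessionId);
        // subsequent calls send the x_axeda_wss_sessionid, Content-Type and Accept headers
    });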
Data Model Introduction    Overview   This project will introduce the ThingWorx Foundation Data Model. Following the steps in this guide, you will consider data interactions based on user needs and requirements, as well as application modularity, reusability, and future updates. We will teach you how to think about a properly constructed foundation that will allow your application to be scalable, flexible, and more secure. NOTE: This guide's content aligns with ThingWorx 9.3. The estimated time to complete this guide is 30 minutes.    Step 1: Benefits   A Data Model creates a uniform representation of all items that interact with one another. There are multiple benefits to such an approach, and the ability to break up items and reuse components is considered a best practice. ThingWorx has adopted this model at a high level to represent individual components of an IoT solution. Feature Benefit Flexibility Once a model has been created, it is simple to update, modify, or remove components without needing to rework the system or retest existing components. Scalability It’s easy to clone and modify devices that are either identical or similar when changing from a Proof of Concept or Pilot Program to a Scaled Business Model. Interoperability Seamlessly plug into other applications. Collaboration A Data Model allows pre-defined links between components, meaning that various parts can be defined when designing the model so that multiple people can work on those individual parts without compromising the interoperability of the components. Seamless platform A Data Model allows for seamless integration with other systems. A properly-formed model will make it easier to create high-value IoT capabilities such as analytics, augmented/virtual reality, industrial connectivity, etc.   Step 2: Entities   Entities   Building an IoT solution in Foundation begins with defining your Data Model, the collection of Entities that represent your connected devices, business processes, and your application. Entities are the highest-level objects created and maintained in Foundation, as explained below.     Thing Shape   Thing Shapes provide a set of characteristics represented as Properties, Services, Events, and Subscriptions that are shared across a group of physical assets. A Thing Shape is best used for composition to describe relationships between objects in your model. They promote reuse of contained Properties and business logic that can be inherited by one or more Thing Templates. In Foundation, the model allows a Thing Template to implement one or more Thing Shapes, which is similar to a class definition in C++ that has multiple inheritance. When you make a change to the Thing Shape, the change is propagated to the Thing Templates and Things that implement that Thing Shape; so, maintaining the model is quick and easy.   Thing Template   Thing Templates provide base functionality with Properties, Services, Events, and Subscriptions that Thing instances use in their execution. Every Thing is created from a Thing Template. A Thing Template can extend another Thing Template. When you release a new version of a product, you simply add the additional characteristics of the version without having to redefine the entire model. This model configuration provides multiple levels of generalization of an asset. A Thing Template can derive one or more additional characteristics by implementing Thing Shapes. 
When you make a change to the Thing Template, the change is propagated to the Things that implement that Thing Template; so again, maintaining the model is quick and easy. A Thing Template can be used to classify the kind of a Thing or asset class or as a specific product model with unique capabilities. If you have two product models and their interaction with the solution is the same (same Properties, Services, and Events), you could model them as one Thing Template. Classifying Thing Templates is useful for aggregating Things into collections, which are useful in Mashups. You may want separate Thing Templates for indexing, searching, and future evolutions of the products   Thing   Things are representations of physical devices, assets, products, systems, people, or processes that have Properties and business logic. All Things are based on Thing Templates (inheritance) and can implement one or more Thing Shapes (composition). A Thing can have its own Properties, Services, Events, and Subscriptions and can inherit other Properties, Services, Events, and Subscriptions from its Thing Template and Thing Shape(s). How you model the interconnected Things, Thing Templates, and Thing Shapes is key to making your solution easy to develop and maintain in the future as the physical assets change. End users will interface with Things for information in applications and for reading/writing data.   Best Practice: Create a Thing Template to describe a Thing, then create an instance of that Thing Template as a Thing. This practice leverages inheritance in your model and reduces the amount of time you spend maintaining and updating your model.   Step 3: Inheritance Model   Defining Things, Thing Templates, and Thing Shapes in your Data Model allows your application to handle both simple and complex scenarios. Entity Function Thing Shapes Assemble individual components. Thing Templates Combine those components into fully functional objects. Thing Unique representation of a set of identical components defined by the Thing Template.       In this example, there is a Parent/Child model between two related Thing Templates. NOTE: Things and Thing Templates may only inherit ONE Thing Template. Both Things and Thing Templates may inherit any number of Thing Shapes. Thing Templates employ a linear-relationship, while Thing Shapes employ a modular-relationship. Any Thing or Thing Template may have any number of sub-components (i.e. Thing Shapes), but each Thing or Thing Template is just one description of one object as a whole. How you decide to compartmentalize your Data Model into Thing Shapes and Thing Templates to create the actual Things that you’ll be using is a custom design that will be specific to each implementation.   Step 4: Scenario   The ThingWorx Data Model provides a way for you to describe your connected devices and match the complexity of a real-world scenario. Things, Thing Templates, and Thing Shapes are building blocks that define your data model.     You can define the components of Things, Thing Templates, and Thing Shapes, including Properties, Services, Events, and Subscriptions. Component Definition Properties Each Property has a name, description, and a data type (Base Type). Depending on the base type, additional fields may be enabled. A simple scalar type, like a number or string, adds basic fields like default value. More complex base types have more options. Properties can be static (i.e. Model Number) or dynamic (i.e. Temperature). 
Services A Service is a method/function defined by a block of code that performs logic specifying actions a Thing can take. There are several implementation methods, or handlers (for example: Script, SQLQuery, and SQL command), for services depending on the template you use. The specific implementation of a user-defined Service is done via a server-side script. The Service can then be invoked through a URL, a REST client capable application, or by another Service in ThingWorx. When you create a new service, you can define input properties and an output. You can define individual runtime permissions for each Service. Events Events are triggers that define changes of state (example: device is on, temperature is above/below threshold) of an asset or system and often require an action to correct or respond to a change. Business logic and actions in a ThingWorx application are driven by Events. Subscriptions Action associated with an Event, primary method to set up intelligence in ThingWorx model which enable you to optimize/automate. Subscriptions use Javascript code to define what you want your application to do when the Event occurs.   NOTE: Anything inherited by a Thing Template or Thing will inherit the associated Components.     This diagram shows what a specific Inheritance Model might look like for a connected Tractor. There is one master Template at the top. In this case, it’s a collection of similar types of tractors. The parent Template inherits a few Shapes - an Engine and a Deck that have been used in previous designs. Importing them as Shapes allows us to reuse previous design work and expedite the development process. One of the child Templates incorporates another Shape, this time in the form of a GPS tracking device. Then, at the bottom, there are the specific tractors with individual serial numbers that will report their connected data back to an IoT Application.   Step 5: Next Steps   Congratulations! You've successfully completed the Data Model Introduction, and learned about: The function of a data model for your IoT application Data model components, including Thing, Thing, Shape and Thing Template How ThingWorx components correspond to connected devices Please comment on this post so we can improve this guide in future ThingWorx version iterations.   This guide is part of 2 learning paths: The next guide in the Getting Started on the ThingWorx Platform learning path is Configure Permissions.  The next guide in the Design and Implement Data Models to Enable Predictive Analytics learning path is Design Your Data Model.      
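The entities described in this guide are normally created in Composer, but the same model can also be scripted. The sketch below is only an illustration using the platform's EntityServices resource; the Thing Template and Thing names are invented for the tractor example above, so substitute your own.

// create a Thing from an existing Thing Template, then enable and restart it so it can be used
Resources["EntityServices"].CreateThing({
    name: "MyTractor-SN001" /* STRING - new Thing name (example) */,
    description: "Tractor instance created from a template" /* STRING */,
    thingTemplateName: "ConnectedTractorTemplate" /* STRING - existing Thing Template (example) */
});

Things["MyTractor-SN001"].EnableThing();
Things["MyTractor-SN001"].RestartThing();

// the new Thing now inherits all Properties, Services, Events and Subscriptions
// defined on the Thing Template and any Thing Shapes it implements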
Thingworx provides a library of InfoTable functions, one of the most powerful ones being DeriveFields (besides that I use Aggregate and Query a lot and ... getRowCount) DeriveFields can generate additional columns to your InfoTable and fill that with values that can be derived from ... nearly anything! Hard coded, based on a Service you call, based on a Property Value, based on other values within the InfoTable you are adding the column to. Just remember for this Service (as well as Aggregate), no spaces between different column definitions and use a , (comma) as separator. Here are some two powerful examples: //Calling another function using DeriveFields //Note that the value thingTemplate is the actual value in the row of column thingTemplate! var params = {   types: "STRING" /* STRING */,   t: AllItems /* INFOTABLE */,   columns: "BaseTemplate" /* STRING */,     expressions: "Things['PTC.RemoteMonitoring.GeneralServices'].RetrieveBaseTemplate({ThingTemplateName:thingTemplate})" /* STRING */ }; // result: INFOTABLE var AllItemsWithBase = Resources["InfoTableFunctions"].DeriveFields(params); //Getting values from other Properties //to in this case is the value of the row in the column to //Note the use of , and no spaces //NOTE: You can make this even more generic with something like Things[to][propName] var params = {     types: "NUMBER,STRING,STRING,LOCATION" /* STRING */,     t: AllAssets /* INFOTABLE */,     columns: "Status,StatusLabel,Description,AssetLocation" /* STRING */,     expressions: "Things[to].Status,Things[to].StatusLabel,Things[to].description,Things[to].AssetLocation" /* STRING */ }; // result: INFOTABLE var AllAssetsWithStatus = Resources["InfoTableFunctions"].DeriveFields(params);
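Aggregate and getRowCount are mentioned above but not shown, so here is a small companion sketch. It reuses the AllAssetsWithStatus InfoTable from the second example; the aggregate choice is arbitrary, and the same rule applies - no spaces between definitions, comma as separator.

// Aggregate: count how many assets fall into each StatusLabel group
var aggParams = {
    t: AllAssetsWithStatus /* INFOTABLE - result of the DeriveFields call above */,
    columns: "Status" /* STRING */,
    aggregates: "COUNT" /* STRING */,
    groupByColumns: "StatusLabel" /* STRING */
};
// result: INFOTABLE - one row per StatusLabel value
var assetsPerStatus = Resources["InfoTableFunctions"].Aggregate(aggParams);

// getRowCount: quick sanity check on the number of groups returned
logger.info("Aggregate returned " + assetsPerStatus.getRowCount() + " rows");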
Please note that the below configuration is intended for testing purposes only. Make sure that your final deployment is within your business security policies. The installation guide can be found at: http://support.ptc.com/WCMS/files/173161/en/ThingWorxDockerInstaller.pdf

Postgres:
- Reference the Installation Guide above for a supported version of Postgres.
- Once deployed, configure it to support remote connections:
  - Navigate to: <PostgresInstallPoint>\data
  - Open the following with a text editor: pg_hba.conf
  - Find the line with IPv4 local connections and change 127.0.0.1/32 to 0.0.0.0/0
  - Restart the PostgreSQL server
- NOTE: This could open up security vulnerabilities to the database, so make sure you take appropriate security measures if the data will be sensitive.

Docker:
- Find the appropriate Docker platform for your OS:
  - Docker Community Edition: for Windows Server 2016, there is a download for the Edge (Windows Server 2016) under the above link -> Docker CE for Windows -> and then scroll down a little bit.
  - Docker Toolbox: if you try to deploy the Docker Community Edition on a system that doesn't support it, it will direct you to this installation instead.
- At some point during or after the installation, it will prompt you to enable Hyper-V.
  - If this is a physical server, these settings will be in your BIOS.
  - For VMware, while the VM is powered down, go to VM -> Virtual Machine Properties -> Hardware -> Processors -> Enable 'Virtualize Intel VT-x/EPT or AMD-V/RVI'.
- Restart, and make sure Docker is running (whale icon in your system tray for the Windows Server 2016 Edge version).
- With Docker running, open a command prompt and look at your IP settings. For Windows Server 2016, right click the start menu -> Command Prompt (admin) and run IPCONFIG. Write down the IP assigned to DockerNAT, as this will be your Postgres HOST later.
- Share your main drive with Docker. In Windows Server 2016, right click the Docker icon in the system tray -> Settings -> Shared Drives -> C:

ThingWorx Installation:
At this point you should have Docker installed and Postgres remotely configured with only the admin user (postgres). The installer will create the image/container inside of Docker, install Tomcat, and configure your database. Below is a capture of the settings used in this example.
Anything not listed (like specifying the container name, which is twxfoundation by default) was left as the default values:       Installation Directory: C:\Program Files (x86)\twxEnterpriseFoundationPostgresDocker       ThingWorx License Directory: C:\Users\Administrator\Desktop\license.bin       Local ThingWorx Foundation Port: 8080       Java Initial Heap setting for TWX Foundation: 1024       Java Max Heap setting for TWX Foundation: 2048       RDS Instance: 1     PostgreSQL Host: 10.0.75.1       PostgreSQL Port: 5432       PostgreSQL Admin Schema: postgres       PostgreSQL Admin Username: postgres       PostgreSQL Admin Password: <see note>       PostgreSQL ThingWorx Foundation Schema: thingworx       PostgreSQL ThingWorx Foundation Username: thingworx       PostgreSQL ThingWorx Foundation Password: <see note>       PostgreSQL ThingWorx Tablespace Location: /                     ​NOTE:​ It is highly recommended to use a complex password (Letters of all cases, numbers, and symbols) as we have opened up our database to remote connections RDS was set to Yes (Default is no) PostgreSQL Host is the IP taken from the earlier steps In this example, the Tablespace location is defined inside of Docker, not Windows Post Install: Confirm that Thingworx is running properly by opening a broswer and attempting to log in For our example, the URL is http://10.0.75.1:8080/Thingworx Troubleshooting: If the installation fails, refer to the end of the Installation Guide on where to look for logs, and items that need to be cleaned up before attempting to install again If the install was successful, but connecting fails, run the following in the command prompt to look at the Docker Server's startup logs for hints: Docker logs -f twxfoundation *Note that twxfoundation is the default during installation.  If this was changed in your installation, use that instead
Hello, There have been some inquiries about how one can use AngularJS for developing custom parts that can run in the ThingWorx environment. To address these inquiries I have created a document that describes the process of integrating AngularJS with ThingWorx. The attached document comes with the source code for the examples presented throughout the document and an extension for AngularJS 1.5.8 and angular-material components. Feedback is appreciated. Thank you.
There are a lot of new and exciting changes included in ​ThingWorx 8.0.0, which is due out today, and no doubt, you'll want to get a test environment set up to try them out.  But there are a few changes that could impact your ability to get up and running with a new server that I wanted to share with everyone. IMPORTANT – Changes to Licensing in ThingWorx 8.0.0 In ThingWorx 7.4.0, a new Licensing Subsystem was introduced, and the license file required was provided as part of the ThingWorx 7.4.0 platform download.  As of ThingWorx 8.0.0, the license.bin file will no longer be provided with the platform download, but will instead need to be downloaded from the PTC Licensing Tool on the PTC eSupport Portal. As part of the operating system specific Installing ThingWorx (OS) sections in the Installing ThingWorx 8.0 guide, additional steps have been added that outline how to use the PTC Licensing Tool to download and install the ThingWorx license file for your organization. The license file downloaded from the ThingWorx Licensing Tool must be renamed to license.bin and copied to the /ThingworxPlatform directory prior to starting ThingWorx for the first time.  The server will fail to start if it cannot find this license file. For more information:      KCS Article 264374 - ThingWorx 8.0.0 Licensing IMPORTANT – Default Administrator Password for ThingWorx Composer is Changing in 8.0.0 In order to help encourage the use of secure passwords for the Administrator account on ThingWorx servers, the default Administrator account password will be changing in version 8.0 and above.  The new Administrator password is a complex password containing mixed case and special characters. This will encourage administrators to change their default, fully-privileged account password to one that is more secure and conforms to their organizational security standards. Information about the new default password can be found in each of the operating system specific Installing ThingWorx (OS) sections of the Installing ThingWorx 8.0 guide. PTC strongly advises against the use of any default password on any product, and encourages administrators to immediately change any default password to a proper, complex password. For more information: KCS Article 264270​ - ​Unable to log into ThingWorx 8.0 Composer using the default Administrator login Application Key Usage is Changing in 8.0.0 As part of an effort to better secure the ThingWorx Platform, the use of Application Keys as URL parameters (through the ‘appkey’ parameter) is being deprecated in ThingWorx 8.0.0. For example, the following URL uses the now deprecated ‘appkey’ parameter to pass in an application key:               https://thingworx.server.com/ThingWorx/Things/MyThing/Services/MyService?appkey=xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx When included in the URL, the ‘appkey’ parameter becomes a browser-cacheable, clear-text rendition of a usable application key.  A knowledgeable user could retrieve this application key from their browser history and use it to perform unauthorized actions against a ThingWorx server. For full security, GET and POST requests that are run against a ThingWorx server should be performed over an HTTPS connection and should include the required application key in the request’s headers.  Query string parameters are visible in both HTTP and HTTPS contexts, and by moving the application key into the request headers, the application key itself is encrypted as part of the HTTPS request. 
(Note that the application key in a header is still visible for plaintext HTTP connections though!)

By default, new installations of ThingWorx 8.0.0 will have the Allow Application Key as URL Parameter option disabled. Upgrades from previous versions of ThingWorx will retain the ability to use the ‘appkey’ query string parameter, but PTC strongly encourages customers to promptly update any solutions that are dependent on sending the appkey using a URL parameter to move to request headers instead. Please note that a future release of ThingWorx may completely disable the use of application keys as URL parameters.

The Allow Application Key as URL Parameter option can be enabled or disabled on the ThingWorx PlatformSubsystem’s configuration page, accessible at System > Subsystems > PlatformSubsystem > Configuration.

For more information: KCS Article 264349 - ThingWorx Appkey URL Parameter is Deprecated as of ThingWorx 8.0.0

Changes to Default Visibility (Organizations) in ThingWorx 8.0.0

Starting in ThingWorx 8.0.0, the Everyone organization will no longer be granted default visibility across all entity collections. Only users who are a member of the Administrator group will be able to see ThingWorx entities on a newly installed server; any additional visibility permissions will need to be explicitly granted by the Administrator.

This is a change in behavior from the previous ThingWorx releases where the Everyone organization (of which the all-encompassing Users group was a member) was granted visibility access by default to all entity collections. Visibility permissions had to explicitly be removed by the Administrator during solution development, which could be overlooked.

For more information: KCS Article 264351 - Changes to Default Visibility (Organizations) in ThingWorx 8.0.0
View full tip
Build extensions quickly and extend your application functionality with the Eclipse Plugin.

GUIDE CONCEPT

Extensions enable you to quickly and easily add new functionality to an IoT solution. Extensions can be service (function/method) libraries, connector templates, functional widgets, and more.

The Eclipse Plugin for ThingWorx Extension Development (Eclipse Plugin) is designed to streamline and enhance the creation of extensions for the ThingWorx Platform. The plugin makes it easier to develop and build extensions by automatically generating source files, annotations, and methods, as well as updating the metadata file to ensure the extension can be imported. These features allow you to focus on developing functionality in your extension, rather than worrying about getting the syntax and format of annotations and the metadata file correct.

YOU'LL LEARN HOW TO

- Install the Eclipse Plugin and Extension SDK
- Create and configure an Extension project
- Create Services, Events and Subscriptions
- Add Composer entities
- Build and import an Extension

NOTE: This guide's content aligns with ThingWorx 9.3. The estimated time to complete all parts of this guide is 60 minutes.

Step 1: Completed Example

Download the attached file needed for this tutorial: ExtensionSampleFiles.zip.

The ExtensionSampleFiles.zip file provided to you contains a completed example of the scenario you will walk through in the following steps. Use this file if you would like to see a finished example as a reference, or if you become stuck during this guide and need some extra help.

Step 2: Download Plugin and SDK

The ThingWorx Extension SDK provides supported classes and APIs to build Java-based extensions. The APIs included in this SDK allow manipulation of ThingWorx platform objects to create Java-based extensions that can extend the capability of the existing ThingWorx platform. The Eclipse Plugin assists in working with the Extension SDK to create projects, entities, and samples.

1. Download the Eclipse Plugin.
2. Download the Extension SDK.
3. Make a note of the directory where the plugin and the extension SDK are stored. The path of the directory will be required in upcoming steps. Do not extract the zip files.

Step 3: Install and Configure

Before you install the plugin, ensure that the software requirements are met for proper installation of the plugin.

1. Open Eclipse and choose a suitable directory as a workspace.
2. Go to the menu bar of the Eclipse window and select Help->Install New Software…
3. After the Install window opens, click Add to add the Eclipse Plugin repository.
4. Click Archive… and browse to the directory where the Eclipse Plugin zip file is stored and click Open.
5. Enter a name (for example, Eclipse Plugin).
6. Click OK. NOTE: Do not extract this zip file.
7. Ensure that the Group items by category checkbox is not selected.
8. Select ThingWorx Extension Builder in the items list of the Install window.
9. Click Next; the items to be installed are listed.
10. Click Next and review the license agreement.
11. Accept the license agreement and click Finish to complete the installation process. NOTE: If a warning for unsigned content is displayed, click OK to complete the installation process.
12. Restart Eclipse.
13. When Eclipse starts again, ensure that you are in the ThingWorx Extension perspective. If not, select Window->Perspective->Open Perspective->Other->ThingWorx Extension, then click OK.

NOTE: Opening any item from File->New->Other…->ThingWorx will also change the perspective to ThingWorx Extension.

You are ready to start a ThingWorx Extension Project!

Click here to view Part 2 of this guide.
View full tip
Introduction

The Edge MicroServer (EMS) and Lua Script Resource (LSR) are Edge software components that can be used to connect remote devices to the ThingWorx platform. Using a Gateway is beneficial because it allows you to run one instance of the EMS on a server and then many instances of the LSR on different devices all over the world. All communication to the platform is handled by this one EMS Gateway server.

The EMS Gateway can be set up in two different types of scenarios: Self-Identifying Remote Things and Explicitly Defined Remote Things. The scenario discussed below involves explicitly defined Remote Things, a ThingWorx server, an EMS, and an LSR. We will need at least one server to run the ThingWorx platform and EMS, but these can always be on separate servers as well. We will also need some other machine or device that will run the LSR.

Visit the support downloads page to find the latest EMS releases. The LSR is contained within the EMS download. You can also navigate to the Edge Support site to read more about the EMS and LSR if this is the first time you have ever configured one. The "ThingWorx WebSocket-based Edge MicroServer Developer's Guide" is also provided inside of the zip file that contains the EMS for further information.

Setting up the EMS

Once we have obtained the EMS download from the support site (see the section above for links) we can begin creating our config.json file. The settings in my working file are particular to my personal IP addresses and Application Key, but the concept remains the same; the necessary sections are covered in detail below, and a minimal example of the file is sketched at the end of this post.

ws_servers
- The host and port parameters are always set to the IP address and port that the ThingWorx platform is being hosted on.
- When the EMS and ThingWorx platform are on the same server, "localhost" can be used instead of an IP address.

appKey
- The appKey section is the value of an Application Key in the ThingWorx platform that should be used for the authentication of the EMS to the platform.
- An Application Key will need to be created and assigned to a user with proper privileges prior to authenticating.

certificates
- The certificates section should be validating and pointing to proper certificates, but in this example I am not validating any certificates for the sake of simplicity.
- More can be read about the certificates sections here.

logger
- The logging section is out of scope of this article, but further reading on logger configurations can be found here.
- The settings in this example will work for basic logging needs.

http_server
- The http_server section configuration parameters tell the EMS what host and port to spin up a server on and whether authentication is necessary for any LSRs trying to connect.
- The LSR has settings that explicitly call out whatever value is set for the host and port in this section, so make sure to set these to an open port that is not in use or blocked behind a firewall.
- Further reading on the http_server section can be found here.

auto_bind
- There are two objects defined in the auto_bind section.
- One of these binds the EMS to an EMSGateway Thing in the platform called "EdgeGateway"; the other is defined in the config.lua file for the LSR.
- The gateway parameter is set to true only in the object, "EdgeGateway", that is being used for the EMS to bind to.
- The host and port defined for the "OtherEdgeThing" should point to the port and IP address that the LSR is running on in the other device.
- By default, the LSR runs on port 8001, but you can always double-check the listening port by finding the Process Identification (PID) number of luaScriptResource.exe and then matching the PID to the corresponding line item in the output of the netstat -ao command in a console window.
- The protocol can be set to "http" in an example application, but make sure to use "https" when security is of concern.

All further reading on the sections of the config.json file can be found in the config.json.complete file included with the EMS download and on the Edge Help Center under the "Creating a Configuration File" section and the "Viewing All Options" section.

Setting up the LSR

In this example, the LSR is going to run on a separate server and point to the EMS server. Two very important additions (rap_host and rap_port) must be made to the default config.lua file:

rap_host
- The rap_host field should be set to the IP address where the EMS is hosted.

rap_port
- The rap_port field should be set to the port parameter defined in the config.json http_server section.

script_resource_host
- The script_resource_host field must be set to ensure that the EMS knows what IP address to use to communicate with the LSR.

scripts.OtherEdgeThing
- This line is necessary to identify the name of the LSR that will register with the EMS to bind to the platform.
- "OtherEdgeThing" can be changed to anything, but make sure that the auto_bind section in the config.json aligns with what you've defined in the config.lua file at this line.

Running the EMS and LSR

Now that we have configured the LSR and EMS to point to each other and the platform, we can try running both of these applications to make sure we are successful.

1. Make sure the ThingWorx platform is running.
2. Create a RemoteThing with the name given in the auto_bind section for the LSR we are connecting.
3. Create an EMSGateway with the name given in the auto_bind section for the EMS as a Gateway to bind to.
4. Start the EMS. This can be done by double-clicking wsems.exe in Windows, running it as a service, or running it directly from the command line.
5. Start the LSR. This can be done by double-clicking luaScriptResource.exe in Windows, running it as a service, or running it directly from the command line.
6. Navigate to the ThingWorx platform and make sure that the Things you have created are connected. Do this by navigating to the Properties menu option and refreshing the isConnected property.

You should be able to browse remote properties and services for each bound RemoteThing, and this means you have successfully set up the EMS as a Gateway device to external LSR applications running on remote devices. Any further questions about browsing remote properties or other configuration settings in the .config files are most likely addressed in the Edge Help Center under the EMS section, and if not, feel free to comment directly on this document.
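To tie the sections above together, here is a minimal, hypothetical pair of configuration files for this Gateway scenario. All host names, ports, the Application Key and the Thing names ("EdgeGateway", "OtherEdgeThing") are placeholders to be replaced with your own values, and the LSR entry assumes the sample "example" template that ships with the LSR. SSL, authentication and certificate validation are left off here purely for readability, so treat this as a test setup only.

config.json (on the EMS Gateway server)

{
     "ws_servers": [ { "host": "thingworx.server.com", "port": 443 } ],
     "appKey": "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx",
     "logger": { "level": "INFO" },
     "certificates": { "validate": false },
     "http_server": { "host": "192.168.0.10", "port": 8080, "ssl": false, "authenticate": false },
     "auto_bind": [
          { "name": "EdgeGateway", "gateway": true },
          { "name": "OtherEdgeThing", "host": "192.168.0.20", "port": 8001, "protocol": "http" }
     ]
}

config.lua (on the remote device running the LSR)

-- rap_host / rap_port must match the http_server section of the EMS config.json
scripts.rap_host = "192.168.0.10"
scripts.rap_port = 8080
-- IP address of this device, so the EMS knows where to reach the LSR
scripts.script_resource_host = "192.168.0.20"
-- The Thing name must match the second auto_bind entry in config.json;
-- "example" refers to the sample template shipped with the LSR
scripts.OtherEdgeThing = { file = "thing.lua", template = "example" }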
View full tip
Since the advent of 6.1.6 we've been able to access the body of a post in a Groovy script.  This frees us from the tyranny of those pesky predefined parameters and opens up all sorts of Javascript object-passing possibilities. To keep this example as simple as possible, there are only two files: postbody.html TestPostBody.groovy postbody.html <!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.01 Transitional//EN"             "http://www.w3.org/TR/html4/loose.dtd"> <html> <head>     <title>Scripto Post Body Demo</title>     <meta http-equiv="content-type" content="text/html; charset=utf-8">     <meta name="viewport" content="width=device-width, initial-scale=1.0, maximum-scale=1.0, user-scalable=no"/>     <meta name="apple-mobile-web-app-capable" content="yes"/>     <meta http-equiv="X-UA-Compatible" content="IE=edge,chrome=1">     <meta http-equiv="refresh" content="3600"> </head> <body> <div id="wrapper"> <div id="header"> <h1>Scripto Post Body Demo</h1> </div> <p> Username:<input type="text" id="username" /><br /> Password:<input type="password" id="password"  /><br /><br /> Enter some valid JSON (validate it <a href="http://jsonlint.com/" alt="jsonlint">here</a> if you're not sure): <br /><textarea id="jsoninput" rows=10 cols=20></textarea><br /> Enter arbitrary Text: <br /><textarea id="textinput" rows=10 cols=20></textarea> <input type="submit" value="Go" id="submitbtn"  onclick="poststuff();"/> </p> <div id="response"></div> </div>         <script src="http://ajax.googleapis.com/ajax/libs/jquery/1.7.1/jquery.min.js"></script>         <script src="http://ajax.googleapis.com/ajax/libs/jqueryui/1.8.16/jquery-ui.min.js"></script>   <script type="text/javascript">         var poststuff= function(){             var data = {}             var temp             if ($("#jsoninput").val() != ""){                 try {                     temp = $.parseJSON($("#jsoninput").val())                 }                 catch (e){                     temp = ""                 }                 if (temp && temp != ""){                     data = JSON.stringify(temp)                 }             }             else if ($("#textinput").val() != ""){                 data.text = $("#textinput").val()                 data = JSON.stringify(data)             }             else data = {"testing":"hello"}             if ($("#username").val() != "" && $("#password").val() != ""){                 // you need contentType in order for the POST to succeed                 var promise = $.ajax({                     type:"POST",                     url: "http://dev6.axeda.com/services/v1/rest/Scripto/execute/TestPostBody?username=" + $("#username").val() + "&password=" + $("#password").val(),                     data: data,                     contentType:"application/json; charset=utf-8",                     dataType:"text"                 })                 $.when(promise).then(function(json){                     $("#response").html("<p>" + JSON.stringify(json) + "</p><br />Check your console for the object.<br />")                     console.log($.parseJSON(json))                     $("#jsoninput").val("")                     $("#textinput").val("")                 })             }         } </script> </body> </html> TestPostBody.groovy import net.sf.json.JSONObject import groovy.json.JsonSlurper import static com.axeda.drm.sdk.scripto.Request.*; try {     // just get the string body content     response = [body: body]     response.element = []     // parse the text into a JSON Object or JSONArray suitable for traversing in 
Groovy     // this assumes the body is stringified JSON     def slurper = new JsonSlurper()     def result = slurper.parseText(body)     result.each{ response.element << it } } catch (Exception e) {     response = [                 faultcode: 'Groovy Exception',                 faultstring: e.message             ]; } return ["Content-Type": "application/json","Content":JSONObject.fromObject(response).toString(2)] The "body" variable is passed in as a standalone implicit object of type String.  The key here is that to process the string as a Json object in Groovy, we send stringed JSON from the Javascript, rather than the straight JSON object. FYI: If you happen to be using Scripto Editor, you might like to know that importing the Request class disables the sidebar input of parameters.  You can enter the parameters in the sidebar, but if this import is included the parameters will not be visible to the script. To access the POST body through the Request Object, you can also refer to: Using Axeda Scripto
View full tip
Joint Study by Cognizant and the Economist Intelligence Unit (EIU) - Making products smart can deliver game-changing innovation, enriched customer experiences and new, across-the-board levels of efficiency. From R&D and manufacturing, through distribution and after-sales support, product data is changing how products are built, sold and cared for. Our latest research reveals practical steps business leaders can take to benefit from this quickly intensifying and accelerating trend.
View full tip
This blog post has been written in collaboration with nexiles GmbH, a PTC Software Partner.

The Edge Micro Server (EMS) and the Lua Script Resource (LSR) allow for an easy connection of sensors and/or data to the ThingWorx Platform. Connections are usually established through the HTTP protocol and a REST API, which sends unencrypted data and user-specific credentials or ThingWorx application keys. This could be fine for a non-production environment. However, in a more secure environment or in production, encryption and authentication should be used wherever and whenever possible to ensure the safety and integrity of the data.

In this scenario a user, client machine or remote device can use the LSR endpoints via the REST API. For this, an authentication mechanism with credentials is required so that only authenticated sessions are allowed. Invocation of services etc. is then protected and encrypted via HTTPS. A default HTTP connection would transfer the credentials as clear text, which is most likely not desired.

The LSR then communicates via HTTPS to the EMS. The EMS communicates with the ThingWorx platform via the AlwaysOn protocol using an encrypted websocket on the platform side.

Prerequisite

ThingWorx is already fully configured for HTTPS. See Trust & Encryption Theory and Hands On for more information and examples.

Securing the EMS

EMS to ThingWorx connection

The config.json holds information on the complete EMS configuration. To add a trusted and secure connection to the ThingWorx platform, the servers, connection type and certificates have to be adjusted. Check the config.json.complete for more information and individual settings. As an example, the following configuration could be a rough outline:

- Switch the websocket server port to 443 in ws_servers
- Declare the connection to the websocket as SSL / TLS in ws_connection
- Declare the certificate used by the ThingWorx platform in certificates
- Validate the certificate
- In case you're using a self-signed certificate, allow using it - otherwise just say "false"
- If not using a self-signed certificate, but a certificate based on a chain of trust, point to the full chain of trust in the cert_chain parameter
- I used the client certificate as an X509 (PEM) formatted .cer file
- I used the client certificate's private key as PKCS #8 (PEM) and encrypted it with the super-secure password "changeme" (no really, change it!)

With this the EMS can connect securely to the ThingWorx platform websocket.

EMS as HTTP(S) server

To secure incoming connections to the EMS, it first of all needs to act as an HTTP server, which is then configured to use a custom certificate.

- In config.json add the http_server section
- Define the host and port as well and set SSL to true
- The full path of the certificate (.cer file) must be provided

With this the EMS can receive client requests from the LSR through a secure interface, protecting the (meta) data sent from the LSR to the EMS. For my tests I'm using a self-signed certificate - created in the Keystore Explorer and exported as an X509 (PEM) formatted .cer file. The private key is not required for this part.

Authentication

To further secure the connection to the EMS acting as HTTP(S) server, it's recommended to use a user and password for authentication. With this, only connections that have the configured credentials in the HTTP header are accepted.
config.json

{
     "ws_servers": [ { "host": "supersecretems.ptc.com", "port": 443 } ],
     "appKey": "<#appKey>",
     "http_server": {
          "host": "localhost",
          "port": 8000,
          "ssl": true,
          "certificate": "C:\\ThingWorx\\ems.cer",
          "authenticate": true,
          "user": "EMSAdmin",
          "password": "EMSAdmin",
          "content_read_timeout": 20000
     },
     "ws_connection": { "encryption": "ssl" },
     "certificates": {
          "validate": true,
          "allow_self_signed": true,
          "client_cert": "C:\\ThingWorx\\twx70.cer",
          "key_file": "C:\\ThingWorx\\twx70.pkcs8",
          "key_passphrase": "changeme"
     }
}

Securing the LSR

LSR to EMS connection

All connections going from the LSR to the EMS are defined in the config.lua with the rap_ parameters. To set up a secure connection to the EMS, we need to provide the server as defined in the config.json in the http_server section (e.g. the default localhost:8000). Define the usage of SSL / TLS as well as the certificate file.

Client to LSR connection

All connections going to the LSR from any client are defined in the config.lua with the script_resource_ parameters. To ensure that all requests are made by authenticated users, set up a userid and password. Configure the usage of SSL / TLS to encrypt the connection between clients and the LSR. A custom certificate is not necessarily required - the LSR provides its own custom certificate by default.

Now opening https://localhost:8001 (default port for the LSR) in a browser will open an encrypted channel forcing authentication via the credentials defined above. Of course this needs to be considered for other calls implementing e.g. a GET request to the LSR. This GET request also needs to provide the credentials in its header.

Authentication

It's recommended to also configure the LSR to use a credential-based authentication mechanism. When setting up the LSR to only accept incoming requests with credentials in the header, the script_resource_userid and script_resource_password can be used for authentication. When connecting to an EMS using authentication, the rap_userid and rap_password can be used to authenticate with the credentials configured in the config.json.

config.lua

The following configuration can be placed anywhere in config.lua - the best place would be just below the log_level configuration.

scripts.rap_host = "localhost"
scripts.rap_port = 8000
scripts.rap_ssl = true
scripts.rap_deny_selfsigned = false
scripts.rap_cert_file = "C:\ThingWorx\ems.cer"
scripts.rap_server_authenticate = true
scripts.rap_userid = "EMSAdmin"
scripts.rap_password = "EMSAdmin"
scripts.script_resource_userid = "admin"
scripts.script_resource_password = "admin"
scripts.script_resource_ssl = true

EMS specific configuration: rap
LSR specific configuration: script_resource

Other considerations

When configuring the EMS and LSR for authentication and encrypted traffic, the configuration files hold information that not everyone should have access to. Therefore the config.json and config.lua must also be protected from an operating system point of view. Permissions should only be granted to the (process) user that is calling the EMS / LSR. Ensure that no one else can read or access the properties and certificates, to avoid password-snooping on an OS level. It's best to grant restricted access to the whole microserver directory, so that only privileged users can gain access.

You're next...
This blog should have given some insight into what is required and how it is configured to achieve a safer, more secure EMS / LSR integration in a real-life production environment. Of course it always depends on the actual implementation, but use these steps as a guideline to secure and protect your (Internet of) Things!
View full tip
ThingWorx provides the capability to use JDBC to connect to relational databases. What would be the steps to take?

1. Find the proper JDBC JAR file. This can be easily located by keeping in mind your database and its version and doing an online search.
2. Download the JDBC Extension Creator from the Marketplace.
3. Follow the instructions to create the actual JDBC extension you will be using.
4. Create a Thing based on the ThingTemplate from the JDBC extension - this represents your actual connection to the database.
5. Set up the configuration:
   a. Connection String - usually I use connectionstrings.com to find that
   b. Validation String - this has to be a VALID SQL statement within the context of the database you are connecting to (like SELECT SYSDATE FROM DUAL for Oracle)
   c. Proper user name and password as defined in the database you are connecting to
6. SAVE.
7. To check if you are properly connected, go back into Edit mode, go to Services, create a new SQL Query or Command, and check the Tables and Columns tab. Actual tables should show up now.
8. If it doesn't work, check your application log.
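For step 5, the exact values depend on your database. As a purely illustrative sketch for an Oracle database (the host, service name and credentials are placeholders, and the exact configuration field labels may differ slightly depending on the JDBC extension you generated):

Driver Class:       oracle.jdbc.OracleDriver
Connection String:  jdbc:oracle:thin:@//dbhost.example.com:1521/ORCLPDB1
Validation String:  SELECT SYSDATE FROM DUAL
User Name:          twx_user
Password:           <database password>

For step 7, any simple SELECT against a table that the configured user can read is enough to confirm that the connection works.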
View full tip