IoT & Connectivity Tips

When using Value Streams to log historical data, there's a service to purge the Value Stream entries from the Thing itself. But what do you do when a Thing that once logged values into a Value Stream has been deleted? There is currently no out-of-the-box way to delete those entries if they're no longer being used. I was asked this question and wanted to share the answer with the entire community: I created a utility application that queries the ThingWorx database directly for Things that are present in the Value Stream but no longer exist, and allows a user to purge those entries.

These services assume PostgreSQL as the persistence provider for the Value Streams. They can be modified if you're using SQL Server. They do not apply to InfluxDB persistence providers.

The twxDBConnector Thing is based on the PostgreSqlServer template, which is present in the Relational Databases extension. It has four main services:

- getEntriesToPurge: queries the ThingWorx DB for all the entries related to a Thing. It does not consider the Value Stream id, so it will purge the entries across all value streams. Requires a Thing name as an input.
- getMissingThings: queries Things that are present in the Value Stream DB table but not in the Things table, meaning that they were deleted.
- purgeThingEntries: purges the entries related to a Thing. It does not consider the Value Stream id, so it will purge the entries across all value streams.
- purgeAllEntries: purges all the entries related to Things that were deleted.

The queries can be modified to allow the selection of the value stream to be cleaned. I also added a sample mashup that leverages the services. The twxDBConnector has a configuration table that requires the DB connection string, user, and password.

You can also do it directly from the DB using pgAdmin and purge it all:

DELETE FROM value_stream
WHERE value_stream.entry_id IN
    -- Select all entries in the value stream table that belong to a nonexistent Thing
    (SELECT entry_id
     FROM value_stream
     LEFT JOIN thing_model ON value_stream.source_id = thing_model.name
     WHERE thing_model.name IS NULL)

Attention: these services modify the ThingWorx database directly, so use them carefully.

To use it:

1. Import the PostgreSQLServer extension (you might need to change the JAR in the extension depending on the ThingWorx version you're using).
2. Import the entities from purgeVSEntries.xml.

Thanks @dsantos for the help on optimizing the queries.

Hope it helps.
Ewerton
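For reference, here is a minimal sketch of how the purge could be invoked from another ThingWorx service. The Thing and service names come from the post above; the input parameter name (thingName) is an assumption and should be verified against the imported entities.

// Minimal sketch: invoke the purge utility from a ThingWorx JavaScript service.
// "twxDBConnector" and "purgeThingEntries" are named in the post; the parameter
// name "thingName" is an assumption - verify it on the imported Thing.
var deletedThing = "MyDeletedThing"; // hypothetical name of a Thing that no longer exists
Things["twxDBConnector"].purgeThingEntries({
    thingName: deletedThing
});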
View full tip
Presentation associated with the recording of the Friday, November 17, 2017 ThingWorx Manufacturing Tips & Tricks web session.

Agenda:
- Overview & Application Demo - Aron Semle
- Architecture Overview - Varathan Ranganathan
- Q&A
View full tip
Hi Community,

Although we have reference architectures and integration paths for connecting devices to ThingWorx through Azure IoT, no one has ever written anything about doing the same from one ThingWorx to another. I thought I'd change that and put some ideas out there around how one might go about doing this. Although this is not officially supported or recommended by PTC, I have consulted with a number of leading SMEs on the subject, who have participated in forming the basis of my thinking outlined here.

Components required (in order of communication path):
1. On-premise ThingWorx Platform
2. Protocol Adapter Toolkit* (CXS) - MQTT
3. Azure IoT Edge
4. Azure IoT Hub
5. ThingWorx Azure IoT Hub Connector (CXS)
6. Azure cloud-hosted ThingWorx Platform

The PAT (2), with a codec to encode MQTT messages, publishes to the on-premise IoT Edge MQTT endpoint, which handles store-and-forward of messages to IoT Hub. An Azure IoT device would exist for each Thing you wish to represent on the ThingWorx servers. The Azure IoT Hub Connector would pick up the incoming messages and pass them on to the cloud ThingWorx, which would decode the MQTT payload and map it to Thing property updates.

The only part that I presently don't like about this approach is that you'll need to decode the MQTT messages on the ThingWorx platform in the cloud when they are received from the IoT Hub, and this mechanism will also need to handle encoding and publishing back to the IoT Hub if C2D (Cloud-to-Device) messages are to be implemented (aka bi-directional). This is required as ThingWorx only supports AlwaysOn as an application-level protocol, so some form of mapping needs to be done.

* Another approach would be to replace the PAT with a custom agent which implements both the ThingWorx Edge SDK and the Azure IoT device SDK.

Regards,

Greg Eva
View full tip
ThingWorx Analytics is offered through the user interface called Analytics Builder with some pre-configured functionality. However, should you want to create your own jobs and mashups, all features from Analytics Builder, and some more, are available through the ThingWorx Services. Running most functionality requires that you provide some data to the Analytics services, and this is where the datasetRef parameter is required.

Data uploaded through Analytics Builder:
- Any dataset uploaded through Builder will have a datasetUri, and its format will be parquet (all lowercase).
- The datasetUri can be obtained from the list of datasets in Builder.

Passing data as an in-body dataset:
- If data isn't uploaded through Analytics Builder, data can be supplied as an InfoTable in the data parameter of the datasetRef. Metadata will also need to be supplied if a new dataset is being created (CreateJob of the AnalyticsServer_DataThing).
- If this data is being supplied for a scoring job, as long as the column names match what the model is expecting, ThingWorx Analytics will inference them appropriately.
- The filter parameter is for parquet datasets already uploaded into ThingWorx Analytics and takes an ANSI SQL statement to add conditions that reduce the number of rows.
- Exclusions is a single-column InfoTable listing the columns you wish to remove from the job you are submitting. For example, if you want Profiles to run on only 5 out of 10 columns, you would give the list of 5 columns that you don't want to include in this exclusions InfoTable.
- Data may also be supplied as a CSV file in the file repository in some cases, in which case you would give the datasetUri parameter the location of the file on the ThingWorx file repository (in the format thingworx://UseCaseFileRepo/tempdata.csv) and set the format to csv.
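As a rough illustration, a datasetRef for a parquet dataset uploaded through Builder might be assembled like this in a ThingWorx service. The field names (datasetUri, format, filter) follow the parameters described above, but the URI value is a placeholder and the exact shape should be verified against the Analytics services' metadata.

// Sketch of a datasetRef pointing at a Builder-uploaded parquet dataset.
var datasetRef = {
    datasetUri: "dataset/uri/from/builder",  // placeholder; copy from the dataset list in Builder
    format: "parquet",                       // all lowercase, as noted above
    filter: "temperature > 100"              // optional ANSI SQL condition to reduce rows
};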
View full tip
Video Author: Christophe Morfin
Original Post Date: March 31, 2017
Applicable Releases: ThingWorx Analytics 7.4 to 8.1

Description: This video walks you through the use of Analysis Replay to execute analysis events on historical data.
View full tip
We are pleased to announce that the Expert Sessions video series is now available in the ThingWorx Community. We are kicking off this availability with a new space dedicated to these helpful technical videos. In the first round of videos, we are highlighting two ThingWorx Foundation videos that are designed to provide foundational knowledge to get you up and running on the ThingWorx IoT platform.

New Expert Sessions Available Now
- ThingWorx Foundation - Installation is an introduction to installing the ThingWorx platform. The video includes information on the environment, prerequisites, and configuration steps when installing ThingWorx, and includes walkthroughs of installing with H2 and PostgreSQL databases, an introduction and demonstration of the Linux installation script, solutions to common installation problems, and more.
- ThingWorx Foundation - Scalability talks about platform sizing with dependency on the type of environment and correlated scalability options. The video educates you about federation and high availability, and provides visual diagrams to help you understand the architecture of different ThingWorx solutions.

What is an Expert Session?
Expert Sessions are focused, technical webcasts (both recorded and live) where PTC subject matter experts share knowledge and best practices on topics related to the design, development, deployment, and operation of PTC software. Expert Sessions are organized into five categories: Get Started, Design, Develop, Deploy, and Operate.

Additional Expert Sessions will be highlighted here in the ThingWorx Community every few weeks. Visit the Online Success Guide to access our Expert Session videos at any time, as well as additional information about ThingWorx training and services.
View full tip
ThingWorx Docker Overview and Pitfalls to Avoid
by Tori Firewind of the IoT EDC

Containers are isolated and can run side-by-side on the same machine, but they share the host OS, making them more efficient in terms of memory usage and scalability. Docker is a great tool for deploying ThingWorx instances because everything is pre-packaged within the Docker image and can be stored in a repository, ready for deployment at any time with little configuration required. By using a different container for every component of an application, conflicting dependencies can be avoided. Containers also facilitate the dev ops process, providing consistent application deployments which can be set up, taken down, and tested automatically using scripts.

Using containers is advantageous for many reasons: simplified configuration, easier dev ops management, continuous integration and deployment, cost savings, decreased delivery time for new application versions, and many versions of an application running side-by-side without any wasted resources setting them up or tearing them down.

The ThingWorx Help Center is a great resource for setting up Docker and obtaining the ThingWorx Docker files from the PTC Software Downloads website. The files provided by PTC handle the creation of the image entirely, simplifying the process immensely. All one has to do is place the ThingWorx version and all of the required dependencies in the staging folder, configure the YML file, and run the build scripts. The Help Center has all of the detailed information required, but there are a few things worth noting here about the configuration process.

For one thing, the platform-settings.json file is generated based on the options given in the YML file, so configuration changes made within this configuration file will not persist if the same options aren't given in the YML file. If using Docker Desktop to run an image on a Windows machine, the configuration options must instead be given in an ENV file that can be referenced from the command used to start the image. The names of the configuration parameters differ from those in the platform-settings.json file in ways that are not always obvious, and a full list can be found here.

For example, if extension imports need to be enabled on a ThingWorx instance running in Docker, then the EXTPKG_IMPORT_POLICY_ENABLED option must be added to the environment section of the YML file like this:

environment:
  - "CATALINA_OPTS=-Xms2g -Xmx4g"
  # NOTE: TWX_DATABASE_USERNAME and TWX_DATABASE_PASSWORD for H2 platform must
  # be set to create the initial database, or connect to a previous instance.
  - "TWX_DATABASE_USERNAME=dbadmin"
  - "TWX_DATABASE_PASSWORD=dbadmin"
  - "EXTPKG_IMPORT_POLICY_ENABLED=true"
  - "EXTPKG_IMPORT_POLICY_ALLOW_JARRES=true"
  - "EXTPKG_IMPORT_POLICY_ALLOW_JSRES=true"
  - "EXTPKG_IMPORT_POLICY_ALLOW_CSSRES=true"
  - "EXTPKG_IMPORT_POLICY_ALLOW_JSONRES=true"
  - "EXTPKG_IMPORT_POLICY_ALLOW_WEBAPPRES=true"
  - "EXTPKG_IMPORT_POLICY_ALLOW_ENTITIES=true"
  - "EXTPKG_IMPORT_POLICY_ALLOW_EXTENTITIES=true"
  - "EXTPKG_IMPORT_POLICY_HA_COMPATIBILITY_LEVEL=WARN"
  - "DOCKER_DEBUG=true"
  - "THINGWORX_INITIAL_ADMIN_PASSWORD=Pleasechangemenow"

Note that if the container is started and then stopped so that changes can be made to the YML file, the license file will need to be renamed from "successful_license_capability_response.bin" back to "license_capability_response.bin" so that the Foundation server can process and rename it again. Failing to rename this file may cause an error to appear in the Application Log, and the server will act as if no license was ever installed: "Error reading license feature info for twx_realtime_data_sub".

In Docker Desktop on a Windows machine, create a file called whatever.env and list the parameters one per line in KEY=value form, using the same names as in the YML environment section above. Then reference this environment file when bringing up the machine, using the following command in PowerShell:

docker run -d --env-file h2.env -p 8080:8080 -v ${pwd}/ThingworxPlatform:/ThingworxPlatform -v ${pwd}/ThingworxStorage:/ThingworxStorage -it <image_id>

Notice in this command that the volumes for the ThingworxPlatform and ThingworxStorage folders are specified with the "-v" options. When building the Docker image in Linux, these are given in the YML file under the volumes section like this (only change the local mount path on the left side of the colon; the container mount on the right side will never change):

volumes:
  - ./ThingworxPlatform:/ThingworxPlatform
  - ./ThingworxStorage:/ThingworxStorage
  - ./tomcat-logs:/opt/apache-tomcat/logs

Specifying the volumes this way allows the ThingWorx logs and configuration files to be accessed directly, a crucial requirement for debugging any issues within the Foundation instance. These volumes must be mapped to existing folders (which have write permissions, of course) so that if the instance won't come up, or there are any other issues which require help from Tech Support, the logs can be copied out and shared. Otherwise, the Docker container is like a black box which obscures what is really going on. There may not be any errors in the Docker logs; the container may just quit without error and with no sign of why it won't stay up. Checking the ThingWorx and Tomcat logs is necessary for debugging, so be sure to map these volumes correctly.

Once these volumes are mapped and ThingWorx is successfully making use of them, adding a license file to the Docker instance is simple. Use the output in the ThingworxPlatform folder to obtain the device ID, grab a valid license file, and put it right back into that ThingworxPlatform folder, exactly the same way as on a regular instance of ThingWorx. However, if the Docker image is being used for a dev ops process, a license may not be necessary. The ThingWorx instance will work and allow development for a time before the trial license expires, which normally is enough time for developers to make their changes, push those changes to a repository, and tear the container down.

Another thing worth noting about ThingWorx Docker image creation is that the version of Java supplied in the staging folder must match the compatibility requirements for each version of ThingWorx. This is the version of Java used by the container to run the Foundation server. For ThingWorx 9.2+, this means using the Amazon Corretto version of Java. The image absolutely will not start ThingWorx successfully if older versions of Java are used, even if the scripts do successfully build the image.

Also note that in the newer versions of ThingWorx Docker, the ThingWorx Foundation version within the build.env file is used throughout the Docker image creation process. Therefore, while the archive name can be hard-coded to whatever is desired, the version should be left as is, including any additional specifications beyond just the version number. For example, the name of the archive can be given as Thingworx-Platform-H2-9.2.0.zip (a prettier version of the archive name than is used by default), but the PLATFORM_VERSION should still be set to 9.2.0-b30 (which is how it appears within the build.env file upon download of the ThingWorx Docker files).

Paying attention to every note in the Help Center is critically important to using ThingWorx Docker, as the process is extensive and can become very complicated depending on how the image will be used. However, as long as the volumes are specified and the log files are accessible, debugging any issues while bringing up a Docker-contained ThingWorx instance is fairly straightforward.

Credits: Images borrowed from the ThingWorx Docker Containerization Tech Talk by Adrian Petrescu
View full tip
ThingWorx Manufacturing Tips & Tricks Webinar is a weekly opportunity to hear PTC Subject Matter Experts present on various topics related to the manufacturing space and applications.

Agenda for this week's recorded session:
- Overview and demo of the MFG Apps - Aron Semle, Solution Manager
- Licensing Options - Serge Romano, VP Manufacturing
- Installation Options and configuration of the apps - Varathan Ranganathan, SME
- Q&A
View full tip
Steps

1. Get the IP address of the ThingWorx Analytics Server: on the server, type ip a.
2. Enter that IP address into the desired web browser.
3. Append the server's port number to the end of the IP address, making sure to put a colon (":") between the end of the IP address and the start of the port number. The default port number is 8080, but it could differ if it was configured differently during installation.
4. Hit Enter and the main page will load.
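For example, if ip a reported 192.0.2.10 (a placeholder address) and the default port is in use, the resulting URL would be:

http://192.0.2.10:8080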
View full tip
The RabbitMQ Management plugin provides a web-based interface into the inner workings of the messaging bus behind ThingWorx Flow. It is installed by the Flow installers, but it is a plain-HTTP service by default and a totally different web server from the NGINX used to front-end ThingWorx Flow. This post describes how to integrate it into the NGINX on your ThingWorx Flow server. This is necessitated by some recent browser behavior changes that make it very hard to reach the HTTP port once you've used an HTTPS service on the same machine from the same browser.

First - let's find the user name and password for the RabbitMQ Management plugin. On a Linux server, the file /etc/rabbitmq/definitions.json holds the name and password for the plugin's UI:

"users": [{
        "name": "flowuser",
        "password": "1780edc6b8628ace2ace72465cdc7b048c88",
        "tags": "administrator"
}],

On a Windows server, the definitions.json file can be found under [flow install location]\modules\RabbitMQ. Of course, access to these directories should be limited.

Second - let's integrate the plugin into NGINX. The best way to integrate the plugin into Flow is to let NGINX reverse proxy to the other HTTP server running the UI for the plugin, which is exactly what happens for ThingWorx itself. That way, only NGINX has to be configured for HTTPS and no other ports need to be opened to allow access to the plugin.

You need to find the file vhost-flow.conf on your system. On Linux, this will be /etc/nginx/conf.d/vhost-flow.conf. On Windows, it will be at C:\Program Files\nginx-[version]\conf\conf.d\vhost-flow.conf by default. Add the following fragment after the last location xxx {...} segment in the file:

# deal with the rabbitMQ admin tool
location ~* /rabbitmq/api/(.*?)/(.*) {
    proxy_pass http://127.0.0.1:15672/api/$1/%2F/$2?$query_string;
    proxy_buffering                    off;
    proxy_set_header Host              $http_host;
    proxy_set_header X-Real-IP         $remote_addr;
    proxy_set_header X-Forwarded-For   $proxy_add_x_forwarded_for;
    proxy_set_header X-Forwarded-Proto $scheme;
}

location /rabbitmq {
    rewrite ^/rabbitmq$ /rabbitmq/ permanent;
}

location ~* /rabbitmq/(.*) {
    rewrite ^/rabbitmq/(.*)$ /$1 break;
    proxy_pass http://127.0.0.1:15672;
    proxy_buffering                    off;
    proxy_set_header Host              $http_host;
    proxy_set_header X-Real-IP         $remote_addr;
    proxy_set_header X-Forwarded-For   $proxy_add_x_forwarded_for;
    proxy_set_header X-Forwarded-Proto $scheme;
}

This makes requests for /rabbitmq get pushed over to the web server at port 15672 on the Flow server.

Test the updated config file with (nginx may not be in your normal path):

nginx -t

Restart the NGINX service.

Linux (one of these will work depending upon your Linux version):
systemctl restart nginx
service nginx restart

Windows:
net stop ThingWorxOrchestrationNginx
net start ThingWorxOrchestrationNginx
-or- use the Services app to restart the service

Thanks to https://groups.google.com/forum/#!topic/rabbitmq-users/l_IxtiXeZC8 for the needed config changes.

You can now use https://yourserver/rabbitmq to get to the login page for the management plugin. Log in with the user and password from the definitions.json file on your system, and you can now monitor the behavior of your RabbitMQ environment.
View full tip
This Zip file contains the Axeda patch (axeda-jms-plugin-<version>-machine-streams) required for proper installation and configuration of an Apache ActiveMQ server to use with the Axeda Machine Streams service (which is supported for Axeda Platform v6.8 and later). Note: Information about the Axeda Machine Streams feature is provided in the Axeda Features Guide available from the Axeda Support site, http://help.axeda.com. This patch overlay needs to be applied to the v5.8.0 ActiveMQ server installed as the Axeda Machine Streams endpoint broker, so that Axeda Platform can send streamed content to that server endpoint. Complete instructions for installing and configuring an Apache ActiveMQ server for Axeda Machine Streams are provided in the reference, Axeda® Machine Streams: A Guide to Setting Up Broker Endpoints. This guide is available with all Axeda product documentation from the PTC Support site.
View full tip
Utilizing the ThingWorx monitoring system

Guide Concept

Being able to view your logs is an important part of knowing what is happening in your system. You can't keep things secure if you don't know who is doing what. These concepts provide you with ways to find information about what is going on in your application and system. We will teach you how to access the monitoring panel and help keep your application running the way you need it to.

You'll learn how to:
- Log data and capture useful information
- Filter and find what you need in the monitoring pages

NOTE: The estimated time to complete this guide is 30 minutes.

Step 1: Example and Strategy

If you'd like to skip ahead, download the completed example of the Aerospace and Defense learning path: AerospaceEntitiesGuide1.zip. Extract, then import the .twx files included.

In the last guide, we ended with viewing the Monitoring section of ThingWorx. What you'll now learn is a more detailed method of knowing exactly what is happening in your application. Part of the magic in logging comes from informative statements and how you perform your error handling.

We will create an Entity with a number of services that will allow us to see what is happening in our application. In order to automate these services and view the logs that they generate, we'll be using Schedulers. When we have everything running, we'll go over how to utilize the Monitoring section to its strengths.

Step 2: Utilizing Logging Functions

Errors happen. Sooner or later, there will be an error. When you have an error in your application, you need to know about it and have information to help resolve the problem. There are five main logging functions on the universal logger object: info, debug, trace, warn, and error.

- info: used for high-level and general information.
- debug: used for debugging an application and capturing data.
- trace: used for low-level logging and tracking what is happening.
- warn: used to warn of possible errors or problems.
- error: used for showing an error in a log.

Let's create a Thing that performs logging for different data types (complex types will be handled in the next section).

1. In the ThingWorx Composer, click the + New button in the top left.
2. In the dropdown list, click Thing.
3. In the Name field, give a name such as LoggingServices.
4. In the Base Thing Template field, select GenericThing.
5. Select a Project, such as PTCDefaultProject. Set this as the default context if you wish.
6. Click Save.

Let's start creating our services.

1. Click the Services tab.
2. Click the Add button to create a new JavaScript service.
3. Enter LogInformation in the Name field of the service.
4. Enter the lines below as the code for the service.

logger.trace("LoggingServices.LogInformation(): Entering Service.");
logger.info("LoggingServices.LogInformation(): Logging at Information Level");
try {
    var x = 10;
    var y = 20;
    var z = 0;
    logger.debug("LoggingServices.LogInformation(): Logging Addition - " + (x + y));
    logger.debug("LoggingServices.LogInformation(): Logging Division - " + (y / z));
} catch(error) {
    logger.error("LoggingServices.LogInformation(): Error - " + error);
}
logger.trace("LoggingServices.LogInformation(): Ending Service.");

5. Click the Save and Continue button. Hit the Execute button.
When triggered, this service logs the trace statements, the info-level message, and the results of the two debug statements. Note that in JavaScript, dividing by zero does not throw an exception: the addition logs 30, the division logs Infinity, and the catch block is not actually exercised here. The next section's example does raise a genuine exception. In the next sections, we'll also cover the log file types and how to filter to get what we want.

Step 3: Logging Complex Objects

As you know, with JavaScript, objects can be seen as JSON. That being said, the JSON.stringify function is very handy for printing out complex objects in ThingWorx (NOTE: to print out an InfoTable, use <infotable>.toJSON()). A short stringify sketch appears after the log table below. Let's create a new service to log more complex types.

1. Open the LoggingServices Thing that we created in the earlier sections.
2. Click the Services tab.
3. Click the Add button to create a new service.
4. Enter LogComplexObjects in the Name field.
5. Enter the statements below into the code area.

logger.trace("LoggingServices.LogComplexObjects(): Entering Service.");
logger.info("LoggingServices.LogComplexObjects(): Logging at Information Level");
try {
    var x = {};
    x.y = 20;
    x.z = 0;
    logger.debug("LoggingServices.LogComplexObjects(): Logging Addition - " + (x + y));
    logger.debug("LoggingServices.LogComplexObjects(): Logging Division - " + (y / z));
} catch(error) {
    logger.error("LoggingServices.LogComplexObjects(): Error - " + error);
}
logger.trace("LoggingServices.LogComplexObjects(): Ending Service.");

After you execute this service, you should be able to see the outcome in the ScriptLog; here the references to the undefined variables y and z throw an exception, so the error branch does fire. Try utilizing some of the filtering methods covered below in order to find the log statements.

Step 4: Viewing and Filtering Logs

The table below shows what each of the logs showcases. Keep in mind that while some of these logs are accessible in the ThingWorx Composer, you can view all of them on the server in the /ThingworxStorage/logs directory.

- Application Log: contains all of the messages that ThingWorx logs while operating. Depending on your settings, this log can display errors only or every execution of the platform.
- Communication Log: contains all communication activity with ThingWorx.
- Configuration Log: contains all of the messages that the ThingWorx application generates for any create, modify, and delete done in ThingWorx. For example, if a Thing or Mashup is created, modified, or deleted, that information will be included in the Configuration Log.
- Database Log: contains all messages related to database activity.
- Script Log: contains all of the messages that the ThingWorx application generates while executing JavaScript services. You can use logger.warn (or logger.info, logger.trace, logger.debug, logger.error). By default, the log displays warnings and errors, so we recommend using the warn function to log this information from the services you are running. Generally, ThingWorx will only publish errors incurred while running a service to this log.
- Security Log: contains all of the messages that the ThingWorx application generates regarding users. This information can include login data and page requests, depending on the log level.
- Script Error Log: contains the stack trace for scripts created in the platform and is only available in the ScriptErrorLog.log file; it is not accessible in the Composer.
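As promised above, here is a minimal sketch of the JSON.stringify approach from Step 3; the object, property names, and values are purely illustrative.

// Illustrative only: serialize a plain JavaScript object before logging it.
var reading = {
    device: "pump-01",     // hypothetical asset name
    temperature: 87.5,
    alarms: ["overTemp"]
};
logger.warn("LoggingServices: current reading - " + JSON.stringify(reading));
// For an InfoTable (e.g. a variable named myTable), use myTable.toJSON() instead.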
Let's go to the Monitoring section of the ThingWorx Composer and view the results from the services we triggered earlier.

1. Click on the Monitoring section on the left-hand side of the Composer.
2. Click on ScriptLog. Based on your role on this team, this might be the log that you view the most.
3. You should be able to see or find the logs we recently generated from executing our new service.

The logging views in Composer will generally show you the last 24-hour period. This can be helpful if you know something ran within the last day; based on the amount of logging your system performs, it can also be far too large a window. Let's start playing with the filters and searches.

1. The search text box allows you to search for specific words in the log based on your time window. Once you have entered the word, hit the ENTER/RETURN key on your keyboard.
2. The filter button opens a menu that provides more features to filter the logs. See below.
3. The configure button opens a pop-up that allows you to filter incoming log statements based on a certain logging level. See below.
4. The date range defaults to the last 24 hours but can be updated to a wider or smaller window. You will need to click the Apply button when ready.
5. The max rows field sets how many records come back in the view. The search will continue until it hits this number or your date range is met. You can shorten this field for faster searching or increase it up to 1000 to show more information. You will need to click the Apply button when ready.
6. Apply and Reset buttons. Apply runs the search you currently have on the screen. Reset resets the date range and max rows ONLY.
7. Auto Refresh allows the logs to continue rolling in based on your current filters. Think of this as a tail command on a server log.
8. This grid provides the actual logging information. The information provided here can also be used to help filter what you're looking for: if you see that a specific thread has the data you need, use the filter menu (2) to filter based on that thread. You can also click on these items to view the log statement more clearly.
9. The log message field shows the message from the grid (8) entry clicked on.

The Filter Menu

The filter menu provides very helpful filters. See below for some information on how it works.

1. The User field allows you to filter the logs to the user that ran the service or function. This is helpful when you know a specific person has logged in and is having problems with your application.
2. The Origin field can help when trying to zero in your search. If you look in the log message grid, there is an origin field; use this field to help fine-tune your search.
3. The Thread field is amazing when you have multiple processes running and would like to see the logs from one process only. You'll need to find a log entry before you know which thread to filter on.
4. The Instance field is similar to the Origin field in the sense that once you find the instance value for a log entry, it can be used here to help filter.
5. The log level range is exactly as the name states: it will help filter messages based on logging level.

The Configure Pop-up

The configure pop-up helps with logging going forward. If you only want to see debug-level information, or you'd like to see everything down to the trace messages, configure it here and you'll see the change as new messages come in.

Step 5: Next Steps

Congratulations!
You've successfully completed the Tracking Activities and Stats guide, and learned how to use the ThingWorx Platform to help track what is happening in your application.

Learn More

We recommend the following resources to continue your learning experience:
- Build: Design Your Data Model
- Manage: Data Model Implementation Guide

Additional Resources

If you have questions, issues, or need additional information, refer to:
- Community: Developer Community Forum
- Support: REST API Help Center
View full tip
This post is part of the series Forced Root Cause Monitoring via Mashups and Modal Popups. To not feel lost or out of context, it's recommended to read the main post first.

Create the Main Mashup

Create a new Mashup called "rcp_MashupMain" as Page and Responsive. Save and switch to the Design tab.

Design

- Add a Layout with two Columns.
- In the right Column, add another Layout (vertical) with a Header and one Row.
- Add a Grid to the Row.
- Add a Panel to the Header.
- Add a Panel into the Panel (we will use a panel-in-panel technique for a better design experience):
  - Set "Width" to 200
  - Set "Height" to 50
  - Set "Horizontal Anchor" to "Center"
  - Set "Vertical Anchor" to "Middle"
  - Delete its current "Style" and add a new custom style with all values at default (this creates a transparent border around the panel)
- Add a Label to the inner Panel:
  - Set "Text" to "Historic data of what went wrong"
  - Set "Alignment" to "Center Aligned"
  - Set "Width" to 200
  - Set "Top" to 14
- Add a Panel to the left Column.
- Add a Navigation Widget to the Panel. This will call the popup window when its Navigate service is invoked (by a Validator):
  - Set "MashupName" to "rcp_MashupPopup"
  - Set "TargetWindow" to "Modal Popup"
  - Set "ShowCloseButton" to false
  - Set "ModalPopupOpacity" to 0.8 (to make the background darker and give more visual focus to the popup)
  - Set "FixedPopupWidth" to 500
  - Set "FixedPopupHeight" to 300
  - Set "PopupScrolling" to "Off"
  - Set "Visible" to false, so it will not be shown to the user during runtime
- Add a Textbox to the Panel. This will show the numeric value corresponding to the State selected in the modal popup. It will just be used for display, with no other functionality, so that we can verify the actual values chosen:
  - Set "Read Only" to true
  - Set "Label" to "Selected Reason (numeric value)"
- Add a Checkbox to the Panel. This will be used as input for the Validator to determine whether an error state is present:
  - Set "Prompt" to "Set this box to 'true' to trigger the popup. Set the value via the Thing to simulate a service. Once the value is set, the trigger is set to 'false' as the popup has been dealt with. A new historic entry will be created."
  - Set "Disabled" to true
  - Set "Width" to 250
- Add a Validator to the Panel. This will determine if the checkbox (based on the trigger / error state) is true or false. If the checkbox switches to true, the validator will call the Navigate service on the Navigation Widget; otherwise it will do nothing.
  - Click on Configure Validator
  - Add Parameter: Name "Input", Base Type BOOLEAN
  - Click Done
  - Set "Expression" to "Input" (the parameter we just created)
  - Set "AutoEvaluate" to true
- Save the Mashup.

Data

In the Data panel on the right-hand side, click on Add entity. Choose the "rcp_AlertThing" and select the following services:
- GetProperties (execute when Mashup is loaded)
- SetProperties
- QueryPropertyHistory (execute when Mashup is loaded)
- clearTrigger

Click Done and the services will appear in the Data panel.

Connections

After configuring the UI elements and the data sources, we now have to connect them to implement the logic we decided on earlier.

GetProperties service
- Drag and drop the trigger property to the Checkbox and bind it to State.
- Set Automatically update values when able to true.

SetProperties service
- From the Navigation Widget, drag and drop the selectedState property and bind it to the SetProperties service's selectedReason property.
- From the Navigation Widget, drag and drop the PopupClosed event and bind it to the SetProperties service.
- From the SetProperties service, drag and drop the ServiceInvokeCompleted event and bind it to the clearTrigger service.
- From the SetProperties service, drag and drop the ServiceInvokeCompleted event and bind it to the QueryPropertyHistory service.

QueryPropertyHistory service
- Drag and drop the returned data's All Data to the Grid and bind it to Data.
- On the Grid, click on Configure Grid Columns:
  - Switch the position of the timestamp and selectedReason fields with their drag-and-drop handles.
  - For the selectedReason field, set the "Column Title" to "Reason for Outage".
  - Switch to the Column Renderer & State Formatting tab.
  - Change the format from "0.00" to "0" (as we're only using integer values anyway).
  - Choose State-based Formatting: set "Dependent Field" to "selectedReason" and "State Definition" to "rcp_AlertStateDefinition".
  - Click Done.

clearTrigger service
- There's nothing more to configure for this service. As the properties will automatically be pushed via the GetProperties service, no special action is required after the clearTrigger service invoke has completed.

Validator Widget
- Drag and drop the Validator's TRUE event to the Navigation Widget and bind it to the Navigate service.
- Drag and drop the Checkbox State to the Validator and bind it to the Input parameter.

Navigation Widget
- Drag and drop the Navigation Widget's selectedState to the Textbox and bind it to the Text property.

Save the Mashup.
View full tip
This example shows how a file can be retrieved via Scripto and then displayed on a web page. The precondition is that an asset has an uploaded file; the script assumes the file is there and that it is not extremely large (under 1 megabyte). This example uses base64 encoding to convert the file into a string. Future versions of Scripto will support other data streams so that base64 encoding will not be necessary.

import com.axeda.drm.sdk.Context
import com.axeda.drm.sdk.data.UploadedFile
import com.axeda.drm.sdk.data.UploadedFileFinder
import com.axeda.drm.sdk.device.Device
import com.axeda.drm.sdk.device.DeviceFinder
// NOTE: the original listing uses Identifier without importing it; the package
// below is an assumption - adjust to the Identifier class of your Axeda SDK version.
import com.axeda.drm.sdk.Identifier

// This script requires parameter "id"
Context ctx = Context.create(parameters.username);

def response = ''
try {
    DeviceFinder deviceFinder = new DeviceFinder(ctx, new Identifier(parameters.id as Integer));
    Device device = deviceFinder.find();

    UploadedFileFinder uff = new UploadedFileFinder(ctx)
    uff.device = device
    uff.hint = 'photo'
    def ufiles = uff.findAll()

    UploadedFile ufile
    if (ufiles.size() > 0) {
        ufile = ufiles[0]
        File f = ufile.extractFile()
        response = getBytes(f).encodeBase64(false).toString()
    }
} catch (Exception e) {
    logger.info(e.message);
    response = [
            faultcode: 'Groovy Exception',
            faultstring: e.message
    ];
}

return ['Content-Type': 'data:image/png;base64', 'Content': response];

static byte[] getBytes(File file) throws IOException {
    return getBytes(new FileInputStream(file));
}

static byte[] getBytes(InputStream is) throws IOException {
    ByteArrayOutputStream answer = new ByteArrayOutputStream();
    // read the content of the file into a byte buffer
    byte[] byteBuffer = new byte[8192];
    int nbByteRead /* = 0*/;
    try {
        while ((nbByteRead = is.read(byteBuffer)) != -1) {
            // append buffer
            answer.write(byteBuffer, 0, nbByteRead);
        }
    } finally {
        is.close()
    }
    return answer.toByteArray();
}
View full tip
Time series prediction uses a model to predict future values based on previously observed values. Time series data differs somewhat from non-time-series data in both the formatting of the data and the training of predictive models. This article highlights several important considerations when dealing with time series data.

Preparing Time Series Data

The data must contain exactly one field with Op Type TEMPORAL and one field with Op Type ENTITY_ID, which defines the identifier for an entity, such as a machine serial number. The ENTITY_ID field should remain the same as long as there are no missing timestamps and it is within the same asset, but it should be different for different assets or asset runs in order to accurately assign history during model training and scoring.

The TEMPORAL field is a numeric field indicating the order of the data rows for a specific entity. One should also ensure that the data is formatted such that the timestamps are equally spaced (for example, one data point every minute) and that no gaps exist in the sequence of numbers. If there are gaps in the time series data, it is recommended to restart the series after the gap as a new entity. Alternatively, if the gap is small enough (a few data points), linear interpolation based on the gap endpoint values within the same entity is generally acceptable.

Model Creation in Time Series

When creating a time series model in Analytics Builder, you will be asked to specify a lookback size and a lookahead parameter. The lookback size determines how many historical data points (including the current row) will be used in the model. The lookahead indicates how many time steps ahead to predict. If the value of the goal variable is not known at time of scoring, unchecking Use Goal History will use the goal column during training but not its history during scoring.

Time series models can also be created in Services using the Training Thing. The lookback size and lookahead parameter are specified in the CreateJob service. The virtualSensor field is used to indicate whether the model should be trained to predict values for a field that will not be available during scoring. For example, one can train a time series model to predict Volume using evolving Temperature and Pressure, based on sensor data for these three variables over a period of time. The Volume sensor may then be removed from further assets in order to reduce costs, and the predictive model can be used instead. A hedged sketch of such a CreateJob call follows below.

Two important considerations:

1. ThingWorx Analytics expands historical data in the time series into new columns. This process creates new features using the values of the previous time steps. Additionally, low-order derivatives, together with average and standard deviation features, are computed over small contiguous subgroups of the historical data. The expansion process can make the dataset exceptionally wide, so time series training is generally significantly slower than training with no history on the same dataset. This gets exacerbated when lookback size = 0 (auto-windowing, a process where the system tries to find the optimal lookback). If there are columns that do not change or change infrequently (such as a device serial number or the zip code of the device's location), these should be marked as Static when importing the data; any columns labeled Static will not be expanded to create new features. Care also needs to be taken to exclude any features that are known to be irrelevant to the prediction.

2. Using a large lookback can limit how many examples / entities the model has available to train on. For example, if a lookback of 8 is used, then any entities that have fewer than 8 examples will not be used in training. For the same reason, scoring for time series produces fewer results than the number of rows provided as input: if 10 rows are provided and the lookback is 6, then only 5 predictions will be produced.
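To make the Services route concrete, here is the hedged sketch mentioned above. The lookback, lookahead, and virtualSensor parameters are named in the text; the Thing name "AnalyticsServer_TrainingThing", the "goalField" parameter, and the datasetRef shape are assumptions to verify against your Analytics server's service definitions.

// Hedged sketch: submit a time series training job via services.
var datasetRef = {
    datasetUri: "dataset/uri/from/builder",  // placeholder URI
    format: "parquet"
};
Things["AnalyticsServer_TrainingThing"].CreateJob({
    datasetRef: datasetRef,
    goalField: "Volume",     // the goal variable from the example above
    lookback: 8,             // 0 would request auto-windowing
    lookahead: 1,            // predict one time step ahead
    virtualSensor: true      // goal sensor will be absent at scoring time
});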
View full tip
This script illustrates how to call a Groovy script as an external web service. The example also applies to calling any external web service that relies on a username and password.

Parameters:
- external_username
- external_password
- script_name

import com.axeda.drm.sdk.Context
import com.axeda.drm.sdk.device.DeviceFinder
import com.axeda.drm.sdk.data.CurrentDataFinder
import com.axeda.drm.sdk.device.Device
import com.axeda.drm.sdk.data.HistoricalDataFinder
import com.axeda.drm.sdk.device.DataItem
import net.sf.json.JSONObject
import com.axeda.drm.sdk.device.ModelFinder
import groovyx.net.http.*
import static groovyx.net.http.ContentType.*
import static groovyx.net.http.Method.*

/**
 * CallScriptoAsExternalWebService.groovy
 *
 * This script illustrates how to call a Groovy script as an external web service.
 *
 * @param external_username       -   (REQ):Str Username for the external web service.
 * @param external_password       -   (REQ):Str Password for the external web service.
 * @param script_name             -   (REQ):Str Script Name to call.
 *
 */

def result
try {
    validateParameters(actual: parameters, expected: ["external_username", "external_password", "script_name"])

    // authentication tokens (username + password)
    def auth_tokens = [username: parameters.external_username, password: parameters.external_password]

    http = new HTTPBuilder('http://platform.axeda.com/services/v1/rest/Scripto/execute/' + parameters.script_name)

    // pass in dummy parameters to the script for illustration
    def parammap = [key1: "val1", key2: "val2"]

    // Call the script
    http.request(GET, JSON) {
        uri.query = auth_tokens + parammap
        response.success = { resp, json ->
            // traverse the wrapped json response
            result = json.wsScriptoExecuteResponse.content.$
        }
        response.failure = { resp ->
            result = response.failure
        }
    }
} catch (Throwable any) {
    logger.error any.localizedMessage
}

return ['Content-Type': 'application/json', 'Content': result]

static def validateParameters(Map args) {
    if (!args.containsKey("actual")) {
        throw new Exception("validateParameters(args) requires 'actual' key.")
    }
    if (!args.containsKey("expected")) {
        throw new Exception("validateParameters(args) requires 'expected' key.")
    }
    def config = [
            require_username: false
    ]
    Map actualParameters = args.actual.clone() as Map
    List expectedParameters = args.expected
    config.each { key, value ->
        if (args.options?.containsKey(key)) {
            config[key] = args.options[key]
        }
    }
    if (!config.require_username) { actualParameters.remove("username") }
    expectedParameters.each { paramName ->
        if (!actualParameters.containsKey(paramName) || !actualParameters[paramName]) {
            throw new IllegalArgumentException(
                    "Parameter '${paramName}' was not found in the query; '${paramName}' is a reqd. parameter.")
        }
    }
}
View full tip
This is the second part of Getting Started with ThingWorx Analytics. In this video, we will be using Postman.

During this video you will learn:
- Creating a dataset
- Entering the dataset configuration
- Uploading the CSV data file to the TWA server

Updated link for access to this video: Getting Started with ThingWorx Analytics Part-2
View full tip
This video shows the commands to execute to deploy the training and result microservices as Docker containers. It is based on Docker Toolbox, to highlight the specific settings required on Toolbox.

Updated link for access to this video: Deploying Training & Result Microservices via Docker Containers for Anomaly Detection
View full tip
ThingWorx community,   As part of a continuous re-evaluation of our third-party software requirements, we regularly add and remove support for versions of operating systems, persistence providers, and web browsers.   On this note, we are planning to end support for Ubuntu 18.04 beginning with the ThingWorx release targeted for mid-CY2022. Per its release cycle, Ubuntu will move version 18.04 into Extended Security Maintenance at this point, meaning it will no longer receive regular maintenance updates.   PTC will continue to support Ubuntu 20.04 for the immediate future and will consider supporting Ubuntu 22.04 once it becomes Generally Available (GA).   Please let me know if you have any questions or concerns.   Regards,   Walter Haydock ThingWorx Product Management
View full tip
Help the ThingWorx product team with some key strategic questions about developing apps in the cloud!   Let us know what you think here!   Stay connected, Kaya
View full tip
This Groovy script is called from an Expression Rule of type Location. For example, the Expression Rule

ExecuteCustomObject("SendTweetWithLocation", "user", "password", "Asset is on the move")

calls the script "SendTweetWithLocation" with the parameters in order. The twitterStatus parameter is the text to send to Twitter; use the user/password of an actual Twitter account. The script also uses the implicit objects context and mobileLocation.

Parameters (Variable Name / Display Name):
- twitterUser / twitterUser
- twitterPassword / twitterPassword
- twitterStatus / twitterStatus

import groovyx.net.http.RESTClient
import static groovyx.net.http.ContentType.*
import com.axeda.drm.sdk.geofence.Geofence

twitter = new RESTClient('http://twitter.com/statuses/')
twitter.auth.basic parameters.twitterUser, parameters.twitterPassword
twitter.client.params.setBooleanParameter 'http.protocol.expect-continue', false

def statusText = "'${parameters.twitterStatus}' for device: ${context.device.serialNumber} on ${new Date()}"

resp = twitter.post(path: 'update.xml',
        requestContentType: URLENC,
        body: [status: statusText, lat: mobileLocation.lat, long: mobileLocation.lng])

logger.info resp.status
logger.info "Posted update $statusText"
View full tip