IoT & Connectivity Tips

ThingWorx Docker Overview and Pitfalls to Avoid
by Tori Firewind of the IoT EDC

Containers are isolated and can run side-by-side on the same machine, but they share the host OS, making them more efficient in terms of memory usage and scalability. Docker is a great tool for deploying ThingWorx instances because everything is pre-packaged within the Docker image and can be stored in a repository, ready for deployment at any time with little configuration required. By using a different container for every component of an application, conflicting dependencies can be avoided. Containers also facilitate the DevOps process, providing consistent application deployments that can be set up, taken down, and tested automatically using scripts.

Using containers is advantageous for many reasons: simplified configuration, easier DevOps management, continuous integration and deployment, cost savings, decreased delivery time for new application versions, and many versions of an application running side-by-side without any wasted resources setting them up or tearing them down. The ThingWorx Help Center is a great resource for setting up Docker and obtaining the ThingWorx Docker files from the PTC Software Downloads website. The files provided by PTC handle the creation of the image entirely, simplifying the process immensely. All one has to do is place the ThingWorx version and all of the required dependencies in the staging folder, configure the YML file, and run the build scripts. The Help Center has all of the detailed information required, but there are a few things worth noting here about the configuration process.

For one thing, the platform-settings.json file is generated based on the options given in the YML file, so configuration changes made within this configuration file will not persist if the same options aren't given in the YML file. If using Docker Desktop to run an image on a Windows machine, then the configuration options must be given in an ENV file that can be referenced from the command used to start the image. The names of the configuration parameters differ from the platform-settings.json file in ways that are not always obvious, and a full list can be found here.

For example, if extension imports need to be enabled on a ThingWorx instance running in Docker, then the EXTPKG_IMPORT_POLICY_ENABLED option must be added to the environment section of the YML file like this:

    environment:
      - "CATALINA_OPTS=-Xms2g -Xmx4g"
      # NOTE: TWX_DATABASE_USERNAME and TWX_DATABASE_PASSWORD for the H2 platform must
      # be set to create the initial database, or connect to a previous instance.
      - "TWX_DATABASE_USERNAME=dbadmin"
      - "TWX_DATABASE_PASSWORD=dbadmin"
      - "EXTPKG_IMPORT_POLICY_ENABLED=true"
      - "EXTPKG_IMPORT_POLICY_ALLOW_JARRES=true"
      - "EXTPKG_IMPORT_POLICY_ALLOW_JSRES=true"
      - "EXTPKG_IMPORT_POLICY_ALLOW_CSSRES=true"
      - "EXTPKG_IMPORT_POLICY_ALLOW_JSONRES=true"
      - "EXTPKG_IMPORT_POLICY_ALLOW_WEBAPPRES=true"
      - "EXTPKG_IMPORT_POLICY_ALLOW_ENTITIES=true"
      - "EXTPKG_IMPORT_POLICY_ALLOW_EXTENTITIES=true"
      - "EXTPKG_IMPORT_POLICY_HA_COMPATIBILITY_LEVEL=WARN"
      - "DOCKER_DEBUG=true"
      - "THINGWORX_INITIAL_ADMIN_PASSWORD=Pleasechangemenow"

Note that if the container is started and then stopped so that changes can be made to the YML file, the license file will need to be renamed from "successful_license_capability_response.bin" back to "license_capability_response.bin" so that the Foundation server can process (and once again rename) it on startup. Failing to rename this file may cause an error to appear in the Application Log, and the server will act as if no license was ever installed: "Error reading license feature info for twx_realtime_data_sub".

In Docker Desktop on a Windows machine, create an environment file (named whatever you like, such as the h2.env used below) and list the parameters in it, one per line.
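The original post showed this file in a screenshot; as a stand-in, here is a minimal sketch of such an ENV file, assuming the option names from the YML example above apply unchanged (KEY=VALUE, one per line):

    # h2.env (hypothetical example, mirroring the YML environment section above)
    CATALINA_OPTS=-Xms2g -Xmx4g
    TWX_DATABASE_USERNAME=dbadmin
    TWX_DATABASE_PASSWORD=dbadmin
    EXTPKG_IMPORT_POLICY_ENABLED=true
    EXTPKG_IMPORT_POLICY_ALLOW_JARRES=true
    EXTPKG_IMPORT_POLICY_ALLOW_JSRES=true
    EXTPKG_IMPORT_POLICY_ALLOW_CSSRES=true
    EXTPKG_IMPORT_POLICY_ALLOW_JSONRES=true
    EXTPKG_IMPORT_POLICY_ALLOW_WEBAPPRES=true
    EXTPKG_IMPORT_POLICY_ALLOW_ENTITIES=true
    EXTPKG_IMPORT_POLICY_ALLOW_EXTENTITIES=true
    EXTPKG_IMPORT_POLICY_HA_COMPATIBILITY_LEVEL=WARN
    DOCKER_DEBUG=true
    THINGWORX_INITIAL_ADMIN_PASSWORD=Pleasechangemenow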
Then, reference this environment file when bringing up the container using the following command in PowerShell:

    docker run -d --env-file h2.env -p 8080:8080 -v ${pwd}/ThingworxPlatform:/ThingworxPlatform -v ${pwd}/ThingworxStorage:/ThingworxStorage -it <image_id>

Notice in this command that the volumes for the ThingworxPlatform and ThingworxStorage folders are specified with the "-v" options. When building the Docker image in Linux, these are given in the YML file under the volumes section like this (only change the local mount path on the left side of the colon; the container mount on the right side will never change):

    volumes:
      - ./ThingworxPlatform:/ThingworxPlatform
      - ./ThingworxStorage:/ThingworxStorage
      - ./tomcat-logs:/opt/apache-tomcat/logs

Specifying the volumes this way allows the ThingWorx logs and configuration files to be accessed directly, a crucial requirement for debugging any issues within the Foundation instance. These volumes must be mapped to existing folders (with write permissions, of course) so that if the instance won't come up, or there are any other issues that require help from Tech Support, the logs can be copied out and shared. Otherwise, the Docker container is like a black box that obscures what is really going on. There may not be any errors in the Docker logs; the container may simply quit without error and with no sign of why it won't stay up. Checking the ThingWorx and Tomcat logs is necessary for debugging, so be sure to map these volumes correctly.

Once these volumes are mapped and ThingWorx is successfully making use of them, adding a license file to the Docker instance is simple. Use the output in the ThingworxPlatform folder to obtain the device ID, grab a valid license file, and put it right back into that ThingworxPlatform folder, exactly as on a regular instance of ThingWorx. However, if the Docker image is being used for a DevOps process, a license may not be necessary. The ThingWorx instance will work and allow development for a time before the trial license expires, which is normally enough time for developers to make their changes, push those changes to a repository, and tear the container down.

Another thing worth noting about ThingWorx Docker image creation is that the version of Java supplied in the staging folder must match the compatibility requirements for each version of ThingWorx. This is the version of Java used by the container to run the Foundation server. For ThingWorx 9.2+, this means using the Amazon Corretto version of Java. The image absolutely will not start ThingWorx successfully if older versions of Java are used, even if the scripts do successfully build the image.

Also note that in the newer versions of ThingWorx Docker, the ThingWorx Foundation version within the build.env file is used throughout the Docker image creation process. Therefore, while the archive name can be hard-coded to whatever is desired, the version should be left as is, including any additional specifications beyond just the version number.
For example, the name of the archive can be given as Thingworx-Platform-H2-9.2.0.zip (a prettier version of the archive name than is used by default), but the PLATFORM_VERSION should still be set to 9.2.0-b30 (which is how it appears within the build.env file upon download of the ThingWorx Docker files).

Paying attention to every note in the Help Center is critically important to using ThingWorx Docker, as the process is extensive and can become very complicated depending on how the image will be used. However, as long as the volumes are specified and the log files are accessible, debugging any issues while bringing up a Docker-contained ThingWorx instance is fairly straightforward.

Credits: Images borrowed from ThingWorx Docker Containerization Tech Talk by Adrian Petrescu
Remote Timeouts

Some Notes on Format:
- Units: the unit of measure for the timeout or limit (seconds, milliseconds, cycles, etc.)
- Description: describes the timeout
- Outcome: describes the default behavior if the timeout or limit is reached
- Related Timeouts: lists other timeouts that are closely related to the timeout in question, meaning they should be configured together because one timeout will affect another

Notes:
- This guide is heavily focused on the C SDK; certain timeouts may have different names in other SDKs or agents
- There are no descriptions of any imposed delays or timeouts related to thread pools on the ThingWorx Platform
- Local timeouts (not related to remote requests) were intentionally not added
- There are far too many applications to provide detail about every situation introduced by every timeout, but this should provide a good starting point for custom timeout configuration

Edge

socket_read_timeout
- Units: milliseconds
- Description: used to free the socket mutex, allowing another service to read on the socket. Increasing this value is beneficial on low-resource systems but could lead to slower performance.
- Outcome: socket read retry
- Related timeouts: ssl_read_timeout

ssl_read_timeout
- Units: milliseconds
- Description: if a partial record is read but not saved, it is possible to discard part of an SSL record that would have otherwise been essential to decrypting the entire record. This timeout prevents that situation; it allows a function to re-acquire the socket mutex in the event that a partial SSL record was captured but the socket_read_timeout was reached.
- Outcome: websocket read retry
- Related timeouts: socket_read_timeout

frame_read_timeout
- Units: milliseconds
- Description: essentially an idle socket timeout. If an edge device requests a message from the ThingWorx Platform and nothing is read for the specified time after the request (not even request headers or the SSL header), then the websocket is assumed to be experiencing an error and the connection is closed.
- Outcome: websocket disconnect
- Related timeouts: message_timeout

message_timeout
- Units: milliseconds
- Description: the maximum overall time that the edge will wait for a full response during a particular request to the ThingWorx Platform. This timeout can be overridden by the frame read timeout if there is no activity on the socket during a given expected response period.
- Outcome: websocket disconnect
- Related timeouts: frame_read_timeout, pingpong_timeout, Message Timeout (WSCommunication subsystem)

pingpong_timeout
- Units: milliseconds
- Description: the ping and pong messages are the heartbeat of the AlwaysOn protocol. If a pong is not received within the specified time after a ping is sent, the websocket will disconnect, even if there are successful messages during the ping/pong period. If a pong is received during the read loop of another service, it will be routed to the pong manager and recorded to prevent a pong timeout.
- Outcome: websocket disconnect
- Related timeouts: message_timeout

connect_timeout
- Units: milliseconds
- Description: when attempting to connect to the ThingWorx Platform, the connection and authentication will wait on an idle socket for the specified time before closing the connection and retrying.
- Outcome: close socket and attempt to reconnect
- Related timeouts: connect_retries, Auth Message Timeout (WSCommunication subsystem)

connect_retries
- Units: integer (a number of tries, not a time measurement)
- Description: not coupled to an explicit time value; sets the maximum number of connect timeouts that the edge will tolerate before giving up. In certain SDKs, -1 corresponds to an infinite number of retries.
- Outcome: stop attempting to reconnect
- Related timeouts: connect_timeout, Auth Message Timeout (WSCommunication subsystem)

file_xfer_timeout
- Units: milliseconds
- Description: if a file transfer to an edge device becomes idle for too long, this timeout triggers an error and frees the memory associated with the file in the program (it does not delete files on disk, in case the transfer is resumed later).
- Outcome: the file transfer is stopped, an error is reported, and the associated file transfer memory objects are freed
- Related timeouts: File Transfer Idle Timeout, Copy (Service) Timeout

ThingWorx Platform Related Services
- Units: milliseconds
- Description: some timeouts are passed in as a parameter to a service on an edge device (SendFile, twApi_InvokeService, etc.). These timeouts act similarly to the WSCommunication message timeout, but they are driven from the edge device instead of the ThingWorx Platform.
- Outcome: timeout error reported
- Related timeouts: message_timeout, frame_read_timeout

Remote Thing (ThingWorx Platform)

Service Timeout
- Units: seconds
- Description: these timeouts are set explicitly in Composer when editing remote services. They override the default message timeout that is set in the WSCommunication subsystem.
- Outcome: service execution error
- Related timeouts: Property Timeout

Property Timeout
- Units: seconds
- Description: these timeouts are set explicitly in Composer when editing remote properties. They override the default message timeout that is set in the WSCommunication subsystem.
- Outcome: property get/set error
- Related timeouts: Service Timeout

ThingWorx Platform Subsystems

WSCommunicationSubsystem

Idle Connection Timeout
- Units: seconds
- Description: if a particular websocket connection has not received or sent a message in the specified time, the connection is assumed to be invalid. The ThingWorx Platform will unbind any related Things and then disconnect the websocket. This should be set higher than the pingpong_timeout value.
- Outcome: websocket disconnect
- Related timeouts: pingpong_timeout

Auth Message Timeout
- Units: seconds
- Description: when a websocket first connects (before binding), the connection will be allowed to stay open for the specified time interval without authenticating. Increasing this value will accommodate high-latency devices, but the ThingWorx Platform will be more vulnerable to saturating its own connections with unauthorized websockets.
- Outcome: websocket disconnect
- Related timeouts: connect_timeout

Message Response Timeout
- Units: seconds
- Description: the maximum amount of time allowed during an edge request before claiming the service result as a failure.
- Outcome: property get/set or service execution error
- Related timeouts: message_timeout (edge)

TunnelSubsystem

Startup Tunnel Timeout
- Units: seconds
- Description: once a remote tunnel is opened, it is given the specified time interval to establish an end-to-end connection before closing; for example, an SSH tunnel is opened but no client attaches to the endpoint.
- Outcome: close tunnel, report error
- Related timeouts: n/a

Idle Tunnel Timeout
- Units: seconds
- Description: once a tunnel and its end-to-end connection are established, this monitors the activity on the socket and reports a timeout if there is no read/write activity for the specified time interval.
- Outcome: close tunnel, report error
- Related timeouts: n/a

FileTransferSubsystem

File Transfer Idle Timeout
- Units: seconds
- Description: when the file transfer subsystem's Copy service is executed, a series of secondary remote services are executed to complete the transfer. The File Transfer Idle Timeout monitors the activity of each secondary service and stops the entire Copy service if any one secondary service records no activity for the specified time interval.
- Outcome: transfer stopped, error reported
- Related timeouts: Copy (Service) Timeout

Copy (Service) Timeout
- Units: seconds
- Description: the number of seconds that the File Transfer Subsystem waits for the completion of a file transfer. This is set every time a transfer is executed.
- Outcome: transfer stopped, error reported
- Related timeouts: File Transfer Idle Timeout
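Since this guide is focused on the C SDK, here is a minimal hedged sketch of overriding a few of the edge-side defaults. The global twcfg structure and its field names are assumptions based on the names used in this guide; confirm them against twcfg.h / twDefaultSettings.h in your SDK version before use.

    #include "twApi.h"   /* assumption: pulls in the global twcfg settings struct */

    void configure_timeouts(void) {
        /* Assumption: these twcfg fields exist with the names used in this guide. */
        twcfg.socket_read_timeout = 500;    /* ms: frees the socket mutex sooner   */
        twcfg.frame_read_timeout  = 10000;  /* ms: idle-socket guard on a response */
        twcfg.message_timeout     = 10000;  /* ms: overall wait for a full reply   */
        twcfg.pingpong_timeout    = 30000;  /* ms: AlwaysOn heartbeat tolerance    */
        twcfg.connect_timeout     = 10000;  /* ms: idle wait during connect/auth   */
        twcfg.connect_retries     = -1;     /* infinite retries (where supported)  */
        /* Per the related-timeouts notes above, the platform-side WSCommunication
           Idle Connection Timeout should stay higher than pingpong_timeout. */
    }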
With ThingWorx, we can already use univariate anomaly alerts (on a single sensor value). However, in many situations, the readings from an individual sensor may not tell you much about the overall issue, and a multivariate anomaly detector can be more useful. This post is intended to provide an overview of the Azure Anomaly Detector and how it can be integrated with ThingWorx. The attachment contains:
- A document with detailed instructions about the setup
- A .csv file with the multivariate timeseries dataset
- A .twx file with some entities that need to be imported in ThingWorx, as well as the CSVParser extension that needs to be installed
- A .zip file that will need to be uploaded to an Azure Blob Container at some point in the setup
5 Common Mistakes to Developing Scalable IoT Applications
by Tori Firewind and the IoT EDC Team

Introduction
To build scalable applications, it's necessary to identify common mistakes and avoid them at the early stages of development. In an expert session this past month, the PTC Enterprise Deployment Team elaborated on why scalability is important and how to avoid the common development pitfalls in IoT. That video presentation has been adapted here for visual consumption of the content as well.

What is Scalability and Why Does it Matter
Enterprise-ready applications can scale and be easily maintained, which is important even from day 1, because scalability concerns are the largest cause of delays to Go Lives. Applications balance many competing requirements, and performance testing is crucial to ensure an application is ready for Go Live. However, don't just test how many remote assets can connect at once; also test any metrics that are expected to increase over time, like the number of remote properties per Thing, the frequency of reporting from those properties, or the number of users accessing the system at once. Also consider how connecting more assets will affect the user experience and business logic, and not just the ability to ingest data.

Common Mistake 1: Edge Property Updates
Because ThingWorx is always listening for updates pushed from the Edge and those resources are always in use, pulling updates from the Foundation side wastes resources. A fetch-from-remote on every read is essentially a round trip, so it is slower and more memory intensive, but there are reasons to do it, such as when the quality tag is needed, since the cache doesn't store it. Say a property is pushed at 11:01, and then there's a network issue at 11:02. If the property is pulled from the cache, it will return the value sent at 11:01 without any indication that there is a more recent value on the Edge device. Most people will use the default options here: read from server cache (which relies on the Edge to push updates) and the VALUE push type; configuring a threshold is a good idea as well. This way, only those property updates which are truly necessary are sent to the Foundation server. Details on property aspects can be found in KCS Article 252792.

The Property Set Approach is well documented in another PTC Community post. This approach is necessary, and considered a best practice, if there is event logic which depends on multiple properties at once. Sending all of the properties needed to determine whether an event should fire in one Infotable ensures there is no need to query the database each time a property update comes in from the Edge, which keeps the business logic independent and reduces the load on the database to improve ingestion performance. This is a very broad topic, and future articles will address it more specifically.

The When Disconnected property aspect is a good way to configure what happens with Edge property values in a mass disconnect scenario. If revenue depends on uptime, consider losing any data that changes while a device is disconnected. All of the updates can be folded into a single value if the changes themselves aren't needed but an updated value is needed to populate remote properties upon reconnect. Many customers will want to keep all of their data, even when a device is offline, and use data stores. In this case, consider how much data each Edge device can store (due to memory limitations on the devices themselves), and therefore how long an outage can last before data is lost anyway.
Also consider whether Foundation can handle massive spikes in activity when this data comes streaming in; usually, a Connection Server isn't enough. Remember that the more data needs to be kept, the greater the potential for a thundering herd scenario.

Handling a thundering herd scenario goes beyond sizing considerations. It is absolutely crucial to randomize the delay each device will wait before attempting to reconnect. It should be considered a requirement to have the devices connect slowly and "ramp up" over time, for multiple reasons. One is that too much data coming in too fast could overwhelm the ingestion queue and result in data loss. Another is that the business logic could demand so many system resources that the Foundation server crashes again and again and cannot be recovered. Turning off the business logic isn't possible if the downtime is unexpected, so definitely rely instead on randomized reconnection times for Edge devices.

Common Mistake 2: Overlooking Differences in HA
To accommodate a shared thing model across many servers, changes had to be made in how the thing model is stored and how the model tree is walked by the Foundation servers. Model information is no longer cached at the Thing level, and the model tree is therefore walked every time model information is needed, so the number of times a Thing is directly referenced within each service should be limited (see the Help Center for details).

It's best to store whatever information is needed from a Thing in an Infotable, making the Things[thingName] reference a single time, outside of any loops. Storing the property definitions outside of the loop prevents repetitious Thing references within the service, which otherwise would have occurred twice for each property (for both the name and the description), and then again for every single property on the Thing, a runtime nightmare.

Certain states previously held in memory are now shared across the cluster, like property values, Thing states, and connection statuses. Improvements have been made to minimize the effects of latency on queries; for instance, they now only return property values on associated Thing Shapes or Thing Templates. Filtering for properties on implementing Things is still possible, but now there is a specific service to do it, called GetThingPropertyValues (covered in detail in the Help Center).

Consider a service built according to this pattern: the first step is a query to get the names of all implementing things of a particular Thing Shape, done outside of any loops, so once per service call. Then, an Infotable is built to store what would have been a direct reference to each thing in a traditional loop. This is a very quick loop that doesn't add much runtime, since it is all in memory, with no references to the thing model or the database; it uses the results of the first query to build the Infotable. Finally, this thing reference Infotable is passed into the new service GetThingPropertyValues to retrieve all of the property info for all of these things at once, thereby walking the thing model only once. The easiest mistake to make here is to do a direct thing reference inside of a loop, using code like Things[thingName].Get() over and over again, thereby traversing the thing model repeatedly and adding a lot of runtime.
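The original post illustrated this with a screenshot; as a rough stand-in, here is a minimal hedged sketch of the pattern in a ThingWorx service. The Thing Shape name is hypothetical, and the resource hosting GetThingPropertyValues (and its parameter names) is an assumption; confirm both in the Help Center for your version.

    // Step 1: one query for the names of all implementing Things (outside any loop)
    var implementing = ThingShapes["PumpShape"].QueryImplementingThings({ maxItems: 10000 });

    // Step 2: build an in-memory Infotable of Thing references; no Things[...] lookups
    // and no thing model walks happen inside this loop
    var thingRefs = Resources["InfoTableFunctions"].CreateInfoTable({ infoTableName: "thingRefs" });
    thingRefs.AddField({ name: "name", baseType: "THINGNAME" });
    for (var i = 0; i < implementing.rows.length; i++) {
        thingRefs.AddRow({ name: implementing.rows[i].name });
    }

    // Step 3: fetch all property values for all of these Things in one call,
    // walking the thing model only once (service location assumed; see Help Center)
    var values = Resources["EntityServices"].GetThingPropertyValues({ thingNames: thingRefs });

The anti-pattern to avoid is the reverse: calling Things[implementing.rows[i].name] inside the loop, which traverses the thing model on every iteration.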
QueryImplementingThingsOptimized is another new service with new parameters for advanced configuration. Searches can now be done on particular networks or to particular depths, and there's an offset parameter that allows a maximum number of items to be returned starting at any place in the list of Things; previously, if you needed the Things at the end of the list, you had to return all of the Things. All of these options are detailed in the Help Center, as well as the restrictions that apply.

Common Mistake 3: Async Service Misuse
Async services are sometimes required, say if a user has to trigger many updates on many remote things at once with the click of a button on a mashup that should not be locked up waiting for service completion. Too many async service calls, though, result in spikes in activity and competition for resources. To avoid this mistake, do not use async unless strictly necessary, and avoid launching too many async threads in parallel. A thread dump will show how many threads there are and what they are doing.

Common Mistake 4: Thread Pool Overload
Adding more threads to the pool may be beneficial in certain circumstances, like if the threads are waiting on other resources to complete their tasks, looking things up in the database (I/O), or waiting to unlock data that can only be accessed by one thread at a time (property writes). In these cases, threads are waiting on other resources, not the CPU, so adding more threads to the pool can improve performance. With too many threads, however, performance degradation will occur due to increased contention, wasted CPU cycles, and context switches.

To check whether there are too many or not enough threads in the pool, take thread dumps and time the completion of requests in the system. Also watch the subsystem memory usage, and note that the size of the queue should never approach the max. Also consider monitoring the overall performance of the system (CPU and memory) with a tool like Grafana, and remember that a good performance test properly exercises all of the business logic and induces threads in a way similar to real-world expectations.

Common Mistake 5: Stream Etiquette
Upserts, or updates to database tables, are expensive operations that can interfere with ingestion if they are performed on the wrong tables. This is why Value Stream and Stream data should never be updated by end users of the application. As described in the DGIS document on best practices, aggregation is the key to unlocking optimal performance, because it reduces the size of the database tables that require upserts. Each data structure discussed below has an optimal use in a well-designed ThingWorx application.

Data Tables are great for storing overview information on all of the Things in one view, and queries on this data source are the fastest. Update this data source as often as possible (by timer), allowing enough time for updates to be gathered and any necessary calculations made. Data Tables can also be updated by end users directly, because rows lock one at a time during updates. Data Tables should be kept as small as possible to improve performance on mashups; for instance, consider using one to show all Things per region if there are millions of Things. Roll-up information is best stored here to avoid calculations upon mashup load, and while a real-time view of many thousands of things at once is practically impossible, this option allows for a frequently updated overview of many things, which can also drill down to other mashup views that are real-time for one Thing at a time.
Value Streams are best used for data ingestion, and queries to these should be kept to a minimum, largely performed by the roll-up logic that populates the Data Tables mentioned above. Queries that chart all of the data coming in are best utilized on individual Thing views, so that only a handful of users are querying the same data sources at a time. Also be sure to use start and end dates, and make use of the "source" field, to improve query performance and create a better user experience. Due to the massive size of the corresponding database tables, it's best to avoid updating Value Streams outside of the data ingestion process altogether.

Streams are similar, but better for storing aggregated, historical data. Usually once per day or per week (outside of business hours if possible), Value Stream data will be smoothed or reduced into fewer data points and then stored into Streams. This allows data to be stored for longer periods of time on the server without using up as much memory or hurting query performance. The high-volume ingested data sources can then be purged frequently, as discussed below.

Infotables are the most memory intensive and are really designed to hold only a small number of rows at a time, usually to facilitate the business logic. Sometimes they will be stored in Streams or Data Tables if they aren't expected to grow larger (see the DGIS Coffee Machine App for an example). Infotables should never be logged; if they are used to transmit Edge property updates (like in the Property Set Approach), they should be processed into other logged (usually local) properties.

Referring to the properties themselves is how to get real-time information on a mashup, say by using the GetProperties service and its auto-update option, which relies on internal websockets. This should be done on individual Thing views only, and sizing considerations need to be made if there will be many of these websockets open at once, say if many end users are all viewing real-time data at the same time.

In the newer versions of ThingWorx, the stream data processing settings cannot be updated directly; instead, find the system object called ThingworxPersistenceProvider and use the service UpdateStreamDataProcessingSettings. ThingWorx Foundation processes data received from remote devices in batches in order to manage the data flow and reduce database churn. These settings configure how large those batches are and how frequently they are flushed to the database (detailed in full in KCS Article 240607). This is very advanced configuration that heavily depends on use case and infrastructure, but some guidance applies to most people: adjusting the scan rate is usually not beneficial; a healthy queue should never approach the max limit; and defaults differ by database because the databases function differently. InfluxDB generally works better with fewer processing threads and higher numbers of things per thread, while PostgreSQL can have a lot of threads, preferably with fewer things per thread. That's why the default values are given as the same number of threads (and this can be changed), but Influx has a larger block size and size threshold, because it can handle more items per thread.
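As a rough illustration, such an adjustment might look like the hedged sketch below. The Thing and service names come from the text above, but every parameter name and value here is an assumption to verify against KCS Article 240607 and your version's Help Center before use.

    // Hedged sketch only: parameter names and values are assumptions
    Things["ThingworxPersistenceProvider"].UpdateStreamDataProcessingSettings({
        maximumBlockSize: 2500,          // how many queued items one batch may hold
        sizeThreshold: 1000,             // queue size that triggers an early flush
        numberOfProcessingThreads: 5,    // Influx: fewer threads, more things per thread
        maximumQueueSize: 250000,        // a healthy queue should never approach this
        maximumWaitTime: 10000,          // ms before a partial batch is flushed
        scanRate: 5                      // usually not beneficial to adjust
    });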
Value Streams ingest all data into the Foundation server, so the database tables that correspond to these data sources grow very large, very quickly, and need to be purged often and outside of business hours, usually once a day or once per week. That's why it's important to reduce the data down to fewer points and push them into Streams for historical reference. For a span of years, a single point a day might be enough; for a span of hours, consider a data point a minute. Push aggregated data into Streams, and then purge the rest as soon as it is no longer needed.
  ThingWorx 9.2 is here! Deploy an entire solution and all its dependencies in one click with Solution Central’s one-click deploy, garner deeper analytic insight with our new waterfall charts, and manage and authenticate users more seamlessly with an Azure Active Directory integration. Discover these features and more in my 9.2 preview post here!   Review our release notes here and be sure to upgrade to 9.2!   Stay connected, Kaya
We will host a live Expert Session: "5 Common Mistakes for Developing Scalable IoT Applications" on June 22nd, 11h00 EST.

Please find below the description of the expert session and the registration link.

Expert Session: 5 Common Mistakes for Developing Scalable IoT Applications
Date and Time: June 22nd, 11h00 EST
Duration: 1 hour
Host: Tori Firewind, Mike Jasperson and Prachi Rath - Enterprise Deployment Center
Registration Here: https://www.ptc.com/en/resources/iiot/webcast/5-common-dev-mistakes-for-scalable-iot-applications

Description: To build scalable applications, it's necessary to identify common mistakes and avoid them at the early stages of development. In this expert session, the PTC Enterprise Deployment Team will elaborate on why scalability is important and how one can avoid the common development pitfalls in IoT.

Existing recorded sessions can be found on the support portal using the keyword 'Expert Sessions'. You can also suggest topics for upcoming sessions using this small form.

Here are some recorded sessions that might be of interest; you can find the full library of webinars using the keyword 'Expert Sessions' in the PTC support portal search.

Thingworx Active Active Clustering
This session will cover the main aspects of the High Availability Clustering feature launched with the ThingWorx 9.0 release.
Recording Link

Upgrade to Thingworx 9 – How to Plan / Evaluate Impacts
This session highlights the key points you should evaluate to properly plan your upgrade to Thingworx 9.
Recording Link

Top 5 items to check for Thingworx Performance Troubleshooting
How do you troubleshoot performance issues in a Thingworx environment? Here we cover the top 5 investigation steps that will help you understand the source of your environment issues and allow better communication with PTC Technical Support.
Recording Link
Hi, everyone!

Today, we’re launching an exciting new series called “PTC Community Spotlights.” Each post in the series explores a community member’s experience with ThingWorx—how they’re using it, what their favorite part about ThingWorx is, and any tips or tricks they may have to share with the PTC Community.

For the first installment, I spoke with @nmilleson of EAC. Check out our conversation below.

Our first PTC Community Spotlight speaker: Nick Milleson of EAC Product Development Systems.

@Kaya: Hi, @nmilleson, welcome! Thank you for taking the time to meet with me and volunteering to be our first ThingWorx Community spotlight!

@nmilleson: Of course, @Kaya. Happy to be here.

@Kaya: To start, can you tell me a little about yourself?

@nmilleson: Absolutely. My name is Nick Milleson. I work as an IoT Solution Architect at EAC Product Development Systems (a PTC Partner). I’m located in Apple Valley, Minnesota, which is a suburb of the Twin Cities.

@Kaya: Nice! We always love hearing from our partners about the awesome work they do. As a PTC Partner, what industries do you typically work in?

@nmilleson: I consult for many, many different industries, including defense, transportation, medical devices, construction & aerospace.

@Kaya: Wow, so what PTC products are you most familiar with?

@nmilleson: My schooling is in mechanical engineering, so I’ve also used Creo, Windchill, and MathCAD. I have been working with the ThingWorx application and helping clients get the most out of ThingWorx for approximately 7 years.

@Kaya: Seven years—that’s a while! Do you have any “ThingWorx” stories from over the years you can share with your community peers?

@nmilleson: Sure thing. I think the coolest thing that I’ve done with ThingWorx was create a custom SVG infographic that featured animations, click events, zoom-ins, and heatmaps based on temperature deltas. It was a custom widget and it worked really well in ThingWorx. When I first started learning to use ThingWorx, I took apart an old RC car and hooked up an Arduino to the motors and steering. I was then able to control it using a ThingWorx mashup. Pretty fun! I’ll be sure to share a visual so people can check it out.

Nick's custom SVG infographic, featuring a ton of neat functionality like zoom-ins & heatmaps.

@Kaya: That’s awesome! Sounds like a fun time indeed. I saw that one of your first publications about ThingWorx for EAC was from 2015 and titled “Updating ThingWorx Using an Arduino Uno and a Serial Connection.” The ThingWorx platform has certainly evolved since then. What would you say is your favorite thing about ThingWorx today?

@nmilleson: It sure has evolved. I would say my favorite thing is that it’s flexible enough to allow you the freedom to design all sorts of applications, while also providing you with all these great tools that make it easy to use as well.

@Kaya: Thanks for that. I can see that you have been a member of the PTC Community for five years. Thank you for providing such great contributions. What do you enjoy most about the PTC Community?

@nmilleson: I enjoy this Community because everyone seems very willing to help each other out, regardless of the complexity of the issue. I stick mostly with the IoT Developers section, but I’ll meander into the Manufacturing Apps and ThingWorx Ideas once in a while as well.

@Kaya: Love to hear it.
Now, so the PTC Community can learn a little more about you, how do you spend your time when you aren’t playing with ThingWorx or engaging on the PTC Community?

@nmilleson: Great question. I have been a professional piano player for almost 20 years, so I’m often at a piano bar making music when I’m not doing software development with EAC.

@Kaya: Awesome. Well, those are all the questions I have for today. Thank you for sharing your experience with ThingWorx! Truly appreciate it.

@nmilleson: Of course. Happy to be a part of it!

Kaya here. We love hearing from community members like @nmilleson about how ThingWorx creates value for them amongst a variety of use cases. If you’re active on the community and interested in being featured on the PTC Community Spotlight series, send me a direct message and we’ll get the ball rollin’.

For now, we’ll let Nick “play us” out. Until next time, stay connected!

-Kaya
Recently I have been accompanying an integration partner and end customer around an issue experienced with ThingWorx resource exhaustion. Early on, it seemed like this was an issue with the ThingWorx Azure IoT Hub Connector, as it would freeze up and become unresponsive. Following a root cause analysis, it became clear that it was actually caused by the lack of a number of standard cloud design patterns which, if used, would have automatically adapted the operation of the overall solution to be far more resilient as well as resource optimized.

The way that the logic was structured, it prioritized job execution on entities with the oldest last success time and would continue to retry these executions (IoT Direct Methods) every few seconds until successful. There were a number of problems here, but I'll unpack a few in order to tie the problem to the solution via design patterns.

1) No exception handling
When the direct method execution failed or timed out, or the system reported being unable to execute the remote service, this response was not used to adapt the solution's behavior.

2) No backoff retry mechanism
As exceptions were not caught, an adaptive retry mechanism with incremental or exponential backoff could not be leveraged to limit the impact of the build-up of failing retries.

3) No exception tracking
Tracking and counting exceptions would allow powering an exponential backoff retry algorithm (with jitter), a Cancel or Circuit Breaker pattern (stop doing something which is just broken), as well as alerting to address the specific areas of the distributed solution experiencing issues.

4) Conflicting priorities
It was interesting to see the manifestation of the conflicting interests of wanting to ensure checks and balances (having all needed data) and system resiliency. Retries and resource usage built up exponentially due to the transient error instead of being backed off. Trying so hard to get the needed data from failing sensors meant that operational sensors were deprioritized and their data was not received either, spreading the localized issue to the whole system.

Around the time that I shared my recommendations and some examples of how to make the solution more resilient, one of my technical colleagues at Microsoft shared some extremely interesting and relevant design patterns documented by Microsoft as a part of the "Microsoft Azure Well-Architected Framework". This framework, with its included design patterns for specific cloud application goals, allows applying well-known, industry-standard approaches to dealing with the challenges of large-scale distributed enterprise systems (reliability, performance, cost optimization). She later shared this blog post describing exactly the exponential backoff retry with jitter pattern which we had together recommended to the systems integrator.

What's interesting for us ThingWorx people is that this framework from Microsoft is about well-architected cloud solutions and does not specifically reference the Azure stack, and as such many of these approaches and design practices can be employed in your ThingWorx applications. What are you waiting for? Go check them out!
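As a concrete footnote to the pattern referenced above, here is a minimal sketch of computing an exponential backoff delay with full jitter; the constants and function name are hypothetical, and the same logic can live in a ThingWorx service or an edge agent's retry loop.

    // Exponential backoff with full jitter: the retry window doubles per failed
    // attempt (capped), and a uniform random draw inside that window de-synchronizes
    // clients so retries don't arrive as a thundering herd.
    var BASE_MS = 1000;    // first retry window
    var CAP_MS = 60000;    // upper bound on any wait
    function nextDelayMs(attempt) {
        var windowMs = Math.min(CAP_MS, BASE_MS * Math.pow(2, attempt));
        return Math.floor(Math.random() * windowMs); // uniform in [0, windowMs)
    }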
Unlocking the Power of Industrial Data
Presentation by Mike Jasperson, VP of the IoT Enterprise Deployment Center

This video presentation was given at the Digital Transformations in Manufacturing conference of 2021, hosted by Enterprise Digital. In this presentation, Mike Jasperson goes over the benefits of modernizing and consolidating access to time-stamped data that is ingested from equipment and sensors into a central location like ThingWorx. Moving away from monolithic, legacy, and siloed systems, and towards more agile solutions, has never been more critical in order to increase machine, operational, and business efficiencies while also opening up visibility into data systems and infrastructure deployments.

This presentation partners with InfluxData to help customers extract value from IoT data systems, maximizing both the performance and operational capabilities of their monitoring systems. To stay competitive in the IoT market, it's important to review the best practices for scaling and testing your industrial metrics solutions, as well as how to get the best performance out of your digital data solutions by using time-series-optimized databases like InfluxDB. The open source technologies discussed here are a great way to create modular and upgradable solutions and accelerate IoT innovation.
Hi, I have attached a Postman collection; this can be used as a template and modified. Steps to import the collection into Postman:
1. In your Postman window, click Import.
2. Once you have clicked Import, choose your file.
3. The collection is now visible on the left side of the window.
JavaMelody is an open source (LGPL) application that measures and calculates statistical information based on application usage. The resulting data can be viewed in a variety of formats, including evolution charts, which track various operations and server attributes over time. There are also robust reporting options that allow data to be exported in either HTML or PDF formats.

Installation
Installation is fairly simple and can be done in just a few minutes. Download the distribution from the JavaMelody Wiki and extract javamelody.jar, available at https://github.com/javamelody/javamelody/releases

Step 1: Download the JavaMelody file (in Unix, use the following command; check the link above for the latest available version and modify the command accordingly):

    wget javamelody.googlecode.com/files/javamelody-1.49.0.zip

Step 2: Extract the zip file (using the following command in Unix, noting the version from Step 1):

    unzip javamelody-1.49.0.zip

Step 3: Copy javamelody.jar and jrobin-x.jar from the extracted files to the WEB-INF/lib directory of the war file deployed in Tomcat, using the following command in Unix:

    cp -pr javamelody.jar jrobin-x.jar /opt/tomcat/server/webapps/<application name>/WEB-INF/lib

Step 4: Edit the web.xml file in the WEB-INF directory of the war file deployed in Tomcat and add the following lines near the start of the file, before the servlet descriptions:

    <filter>
        <filter-name>monitoring</filter-name>
        <filter-class>net.bull.javamelody.MonitoringFilter</filter-class>
    </filter>
    <filter-mapping>
        <filter-name>monitoring</filter-name>
        <url-pattern>/*</url-pattern>
    </filter-mapping>
    <listener>
        <listener-class>net.bull.javamelody.SessionListener</listener-class>
    </listener>

Step 5: Restart the Tomcat server after editing web.xml and access the JavaMelody page using the following URL pattern (the URL can be customized in the configuration file):

    http://<hostname on which tomcat is configured>:<port on which the application is accessed>/<application name>/monitoring

Reports can be viewed in weekly, daily, or monthly formats. They can also be downloaded, or sent over email in PDF format; the iText library for web apps and Java's Mail and Activation libraries are required on the server in order to use the mail session. The report provides the same information that can be found on the monitoring web page, with both high-level and detailed views: CPU and memory usage, detailed SQL information, SQL statistics, server requests, system threads, and data caches.

System Overhead
On the JavaMelody Wiki, https://github.com/javamelody/javamelody/wiki/Overhead, one can find a healthy discussion about system overhead. The general consensus is that the overhead cost caused by JavaMelody is very low and that the feature is safe to enable full-time in a QA environment: JavaMelody records only statistics and not events, so the memory overhead is quite minimal, with no I/O on the wire and minimal I/O on disk. If no problems arise, enabling JavaMelody in the production environment can be considered as well. Using a tool like JavaMelody can lead to valuable insights on how to optimize servers or uncover otherwise hidden issues, providing value that exceeds the overhead cost.
Hi everyone,   We’re back! And we’ve got exciting news about Solution Central! As tempted as I am to share the news myself, I thought it only fitting to have Janie Pascoe, Product Manager of Solution Central, share the news with you. You may remember Janie from this post on ThingWorx's OPCUA functionality or this one on what it’s like to transition from a ThingWorx developer to a ThingWorx product manager. Janie, welcome back! The floor is yours.   Janie, PM of Solution Central: Thank you so much Kaija. I wanted to bring your readers up to speed on some of the latest and greatest in Solution Central. If you haven’t logged in to the Solution Central portal in a while, I highly recommend you do so because you will immediately be notified of what’s new in the application as you can see here due to our newly added what’s new popup blurb!     But in the spirit of giving you even more detail, let me tell you a bit more about what is new in Solution Central 3.0. This release is full of more intelligence than ever before! You can now not only deploy the solutions themselves from Solution Central but deploy all of a solution’s dependencies with the single click of a button. So, instead of having to deploy each dependency separately and in a specific order, Solution Central is now smart enough to understand the dependencies and deploy them for you. We have also added enhancements to our Solution Detail panel to make it even more intuitive and easy find what you’re looking for. And when it comes to clean up activities, we have you covered. Solution Central can now forget an instance when it’s no longer needed—no more questioning whether an instance is in active use or not.   Kaya: Thanks, Janie. Exciting stuff! Readers, you can learn all about these new features and more in our Release Notes and Help Center documentation. Be sure to try out the latest functionality!   Any questions, comments, or ideas for enhancements to Solution Central can be sent directly to jpascoe@ptc.com.   Stay on the lookout for our next release!   As always, stay connected, Kaya
Interested in learning how others using and/or hosting ThingWorx solutions can comply with various regulatory and compliance frameworks?   Based on inquiries regarding the ability of customers to meet a wide range of obligations – ranging from SOC 2 to ISO 27001 to the Department of Defense’s Cybersecurity Maturity Model Certification (CMMC) – the PTC's IoT Product Management and EDC teams have collaborated on a set of detailed articles explaining how to do so.   Please check out the ThingWorx Compliance Hub (support.ptc.com login required) for more information!
Analytics projects typically involve using the Analytics API rather than Analytics Builder to accomplish different tasks. The attached documentation provides examples of code snippets that can be used to automate the most common analytics tasks on a project, such as:
- Creating a dataset
- Training a model
- Real-time scoring (predictive and prescriptive)
- Retrieving the validation metrics for a model
- Appending additional data to a dataset
- Retraining the model
The documentation also provides examples that are specific to time series datasets. The attached .zip file contains both the document as well as some entities that you need to import in ThingWorx to access the services provided in the examples.
Check out our expert session recording library! The recordings will also be published in our customer events library, posted on each event. Stay tuned!

Your feedback is very important to us! After watching the recordings, please take 2 minutes to complete this survey.

Thingworx Foundation
Session Name | Link | Duration
Thingworx Mashup 101 - Do's and Don'ts | Recording link | 00:33:41
Thingworx Active Active Clustering (High Availability) | Recording link | 00:26:24
Upgrade to Thingworx 9 – How to Plan / Evaluate Impacts | Recording link | 00:27:02
Thingworx Flow Overview | Recording link | 00:43:40
Top 5 items to check for Thingworx Performance Troubleshooting | Recording link | 00:26:55
ThingWorx DEVOPS QuickStart Guide | Recording link | 00:45:05
ThingWorx Backup And Recovery | Recording link | 00:20:14
Expert Session - Designing your Data Model in Thingworx | Recording link | 00:26:45
ThingWorx Installation | Recording link | 00:15:07
Expert Session - Introduction To Edge Connectivity | Recording link | 00:15:56
Expert Session - Basic Mashup Design in Thingworx | Recording link | 00:36:31
Expert Session - Extensions 101 | Recording link | 00:30:08
Expert Session – Developing your Data Model in Thingworx | Recording link | 00:39:19
Thingworx Scalability | Recording link | 00:09:18
Expert Sessions - ThingWorx Patch Upgrade | Recording link | 00:03:19

Thingworx Navigate
Session Name | Link | Duration
Understanding license requirements for Thingworx Navigate | Recording link | 00:32:40
Navigate SSL and Authentication | Recording link | 00:34:30
Navigate 3D Viewer | Recording link | 00:43:25
Component Based App Development | Recording link | 00:24:07
Navigate 9.0 – What's new | Recording link | 00:27:07
Overview of SSO Implementation for ThingWorx Navigate and Windchill with PingFederate | Recording link | 00:18:36
Identifying the right SSO mix for Navigate 1.6 | Recording link | 00:57:56
Navigate Configuration - PingFederate Automation Script | Recording link | 00:51:07
Expert Session - Navigate Configuration/Windchill Authentication | Recording link | 00:23:07
What's new with Navigate 1.8 and the new Navigate 1.8 installer | Recording link | 01:05:26
Creating an I*E task for use in Navigate | Recording link | 00:05:36

Vuforia Expert Capture
Session Name | Link | Duration
VEC In a Nutshell | Video Link | 00:31:39
We will host a live Expert Session: "Top 5 Thingworx environment monitoring best practices" on March 25th, 10h00 EST.

Please find below the description of the expert session and the registration link.

Expert Session: Top 5 Thingworx environment monitoring best practices
Date and Time: March 25th, 10h00 EST
Duration: 1 hour
Host: Tori Firewind, Tim Atwood and Dave Bernbeck from the Enterprise Deployment Center
Registration Here: https://www.ptc.com/en/resources/iiot/webcast/top-5-thingworx-monitoring-best-practices

In this session, we will be reviewing the main monitoring practices to keep a healthy environment and discuss the main issues from the audience. Bring your questions!

Existing recorded sessions can be found on the support portal using the keyword 'Expert Sessions'. You can also suggest topics for upcoming sessions using this small form.

Here are some recorded sessions that might be of interest; you can find the full library of webinars using the keyword 'Expert Sessions' in the PTC support portal search.

Thingworx Active Active Clustering
This session will cover the main aspects of the High Availability Clustering feature launched with the ThingWorx 9.0 release.
Recording Link

Upgrade to Thingworx 9 – How to Plan / Evaluate Impacts
This session highlights the key points you should evaluate to properly plan your upgrade to Thingworx 9.
Recording Link

Top 5 items to check for Thingworx Performance Troubleshooting
How do you troubleshoot performance issues in a Thingworx environment? Here we cover the top 5 investigation steps that will help you understand the source of your environment issues and allow better communication with PTC Technical Support.
Recording Link
Scripto provides a RESTful endpoint for Groovy Custom Objects on the Axeda Platform. Custom Objects exposed via Scripto can be accessed via a GET or a POST, and the script will have access to request parameters or body contents. Any Custom Object of the "Action" type will automatically be exposed via Scripto. The URL for a Scripto service is currently defined by the name of the Custom Object:

GET: http://{{YourHostName}}/services/v1/rest/Scripto/execute/<customObjectName>

Scripto enables the creation of "Domain Specific Services". This allows implementers to take the Axeda Domain Objects (Assets, Models, DataItems, Alarms) and expose them via a service that models the real-world domain directly (trucks, ATMs, MRI machines, sensor readings). This is especially useful when creating a domain-specific UI, or when integrating with another application that will push or pull data.

Authentication
There are several ways to test your Scripto scripts, as well as several different authentication methods. The following authentication methods can be used:
- Request parameter credentials: ?username=<yourUserName>&password=<yourPassword>
- Request parameter sessionId (retrieved from the Auth service): ?sessionid=<sessionId>
- Basic Authentication (challenge): from a browser or cURL, simply browse to the URL to receive an HTTP Basic challenge.

Request Parameters
You can access the parameters to the Groovy script via two objects, Call and Request. Request is actually just a subclass of Call, so the values will always be the same regardless of which object you use. Although parameters may be accessed off of either object, Call is preferable when chaining Custom Objects together. Call also includes a reference to the logger, which can be used to log debug messages.

GET: http://{{YourHostName}}/services/v1/rest/Scripto/execute/<Your Script Name>?sessionid=<Session Id>&serial_number=mySerialNumber

Accessing parameters through the Request object:

    import com.axeda.drm.sdk.scripto.Request

    // Request.parameters is a map of strings
    def serial_number = Request.parameters.serial_number
    assert serial_number == "mySerialNumber"

Accessing parameters through the Call object:

    import com.axeda.drm.sdk.customobject.Call

    // Call.parameters is a map of strings
    def serial_number = Call.parameters.serial_number
    assert serial_number == "mySerialNumber"

Accessing the POST Body through the Request Object
The content from a POST request to Scripto is accessible as a string via the body field in the Request object. Use slurpers for XML or JSON to parse it into an object.

POST: http://{{YourHostName}}/services/v1/rest/Scripto/execute/<Your Script Name>?sessionid=<Session Id>
Body: { "serial_number":"mySerialNumber"}

    import com.axeda.drm.sdk.scripto.Request
    import groovy.json.JsonSlurper

    def body = Request.body
    def slurper = new JsonSlurper()
    def result = slurper.parseText(body)
    assert result.serial_number == "mySerialNumber"

Returning Plain Text
Groovy custom objects must return some content. The format of that content is flexible and can be returned as plain text, JSON, XML, or even binary files. The following example simply returns plain text.

GET: http://{{YourHostName}}/services/v1/rest/Scripto/execute/<Your Script Name>

    // Outputs: hello
    return ["Content-Type":"text/plain","Content":"hello"]

Returning JSON
We use the JSONObject class to format our Map-based content into a JSON structure.
Authentication
There are several ways to test your Scripto scripts, as well as several different authentication methods. The following authentication methods can be used:
- Request Parameter credentials: ?username=<yourUserName>&password=<yourPassword>
- Request Parameter sessionId (retrieved from the Auth service): ?sessionid=<sessionId>
- Basic Authentication (challenge): from a browser or cURL, simply browse to the URL to receive an HTTP Basic challenge.

Request Parameters
You can access the parameters to the Groovy script via two objects, Call and Request. Request is actually just a sub-class of Call, so the values will always be the same regardless of which object you use. Although parameters may be accessed off of either object, Call is preferable when Chaining Custom Objects (TODO LINK) together. Call also includes a reference to the logger, which can be used to log debug messages.

GET: http://{{YourHostName}}/services/v1/rest/Scripto/execute/<Your Script Name>?sessionid=<Session Id>&serial_number=mySerialNumber

Accessing parameters through the Request object:

import com.axeda.drm.sdk.scripto.Request

// Request.parameters is a map of strings
def serial_number = Request.parameters.serial_number
assert serial_number == "mySerialNumber"

Accessing parameters through the Call object:

import com.axeda.drm.sdk.customobject.Call

// Call.parameters is a map of strings
def serial_number = Call.parameters.serial_number
assert serial_number == "mySerialNumber"

Accessing the POST Body through the Request Object
The content from a POST request to Scripto is accessible as a string via the body field of the Request object. Use Slurpers for XML or JSON to parse it into an object.

POST: http://{{YourHostName}}/services/v1/rest/Scripto/execute/<Your Script Name>?sessionid=<Session Id>
Body: { "serial_number": "mySerialNumber" }

import com.axeda.drm.sdk.scripto.Request
import groovy.json.JsonSlurper

def body = Request.body
def slurper = new JsonSlurper()
def result = slurper.parseText(body)
assert result.serial_number == "mySerialNumber"

Returning Plain Text
Groovy custom objects must return some content. The format of that content is flexible and can be returned as plain text, JSON, XML, or even binary files. The following example simply returns plain text.

GET: http://{{YourHostName}}/services/v1/rest/Scripto/execute/<Your Script Name>

// Outputs: hello
return ["Content-Type": "text/plain", "Content": "hello"]

Returning JSON
We use the JSONObject class to format our Map-based content into a JSON structure. This eliminates any concern about formatting: you just build up Maps of Maps, and the fromObject() utility method formats them properly.

GET: http://{{YourHostName}}/services/v1/rest/Scripto/execute/<Your Script Name>

import net.sf.json.JSONObject

root = [
  items: [
    num_1: "one",
    num_2: "two"
  ]
]

/** Outputs
{
  "items": {
    "num_1": "one",
    "num_2": "two"
  }
}
**/

return ['Content-Type': 'application/json', 'Content': JSONObject.fromObject(root).toString(2)]

Link to JSONObject documentation

Returning XML
To return XML, we use the MarkupBuilder to build the XML response. This allows us to write code that follows the format of the XML being generated.

GET: http://{{YourHostName}}/services/v1/rest/Scripto/execute/<Your Script Name>?sessionid=<Session Id>

import groovy.xml.MarkupBuilder

def writer = new StringWriter()
def xml = new MarkupBuilder(writer)
xml.root() {
    items() {
        num_1("one")
        num_2("two")
    }
}

/** Outputs
<root>
  <items>
    <num_1>one</num_1>
    <num_2>two</num_2>
  </items>
</root>
**/

return ['Content-Type': 'text/xml', 'Content': writer.toString()]

Link to Groovy MarkupBuilder documentation

Returning Binary Content
To return binary content, you will typically use the fileStore API to upload a file that you can then download using Scripto. See the fileInfo section to learn more. In this example we connect the InputStream associated with the getFileData() method directly to the output of the Scripto script. This causes the bytes available in the stream to be forwarded directly to the client as the body of the response.

GET: http://{{Your Host Name}}/services/v1/rest/Scripto/execute/{{Your Script Name}}?sessionid={{Session Id}}&fileId=123

import static com.axeda.sdk.v2.dsl.Bridges.*
import com.axeda.services.v2.*
import com.axeda.sdk.v2.exception.*

def contentType = parameters.type ?: 'image/jpg'
return ['Content': fileInfoBridge.getFileData(parameters.fileId), 'Content-Type': contentType]

The Auth Service - Authentication via AJAX
Groovy scripts are accessible to AJAX-powered HTML apps with Axeda instance credentials. To obtain a session from an Axeda server, make a GET call to the Authentication service, located at the following example URL:

https://{{YourHostName}}/services/v1/rest/Auth/login

This service accepts a valid username/password combination in the incoming request parameters and returns a SessionID. The parameter names it expects are as follows:

principal.username: the username for the valid Axeda credential.
password: the password for the supplied credential.
A sample request to the Auth Service:

GET: https://{{YourHostName}}/services/v1/rest/Auth/login?principal.username=YOURUSER&password=YOURPASS

would yield this response (at this time the response is always in XML):

<ns1:WSSessionInfo xsi:type="ns1:WSSessionInfo" xmlns:ns1="http://type.v1.webservices.sl.axeda.com" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">
  <ns1:created>2013-08-12T13:19:37 +0000</ns1:created>
  <ns1:expired>false</ns1:expired>
  <ns1:sessionId>19c33190-dded-4655-b2c0-921528f7b873</ns1:sessionId>
  <ns1:sessionTimeout>1800</ns1:sessionTimeout>
</ns1:WSSessionInfo>

The response fields are as follows:

created: the timestamp for when the session was created
expired: a boolean indicating whether or not this session is expired (should be false)
sessionId: the ID of the session, which you will use in subsequent requests
sessionTimeout: the time (in seconds) that this session will remain active
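The sessionId does not have to be extracted in the browser. As a minimal sketch, assuming the same WSSessionInfo XML has been captured as a string in a Groovy script (for example, in a test harness), the standard Groovy XmlSlurper can pull the fields out by their local names:

def responseXml = '''<ns1:WSSessionInfo xsi:type="ns1:WSSessionInfo" xmlns:ns1="http://type.v1.webservices.sl.axeda.com" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">
  <ns1:created>2013-08-12T13:19:37 +0000</ns1:created>
  <ns1:expired>false</ns1:expired>
  <ns1:sessionId>19c33190-dded-4655-b2c0-921528f7b873</ns1:sessionId>
  <ns1:sessionTimeout>1800</ns1:sessionTimeout>
</ns1:WSSessionInfo>'''

// XmlSlurper (groovy.util.XmlSlurper in the Groovy versions of this era) parses the
// document into a GPathResult; child elements match on their local name, so the
// ns1 prefix does not need to be declared for this simple extraction.
def session = new XmlSlurper().parseText(responseXml)

def sessionId = session.sessionId.text()
assert sessionId == "19c33190-dded-4655-b2c0-921528f7b873"
assert session.expired.text() == "false"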
The Auth Service is frequently invoked from JavaScript as part of Custom Applications. The following code demonstrates this style of invocation.

function authenticate(host, username, password) {
    try {
        netscape.security.PrivilegeManager.enablePrivilege("UniversalBrowserRead");
    } catch (e) {
        // must be IE
    }
    var self = this;
    // Mozilla/Safari
    if (window.XMLHttpRequest) {
        self.xmlHttpReq = new XMLHttpRequest();
    }
    // IE
    else if (window.ActiveXObject) {
        self.xmlHttpReq = new ActiveXObject("Microsoft.XMLHTTP");
    }
    var SERVICES_PATH = "/services/v1/rest/";
    var url = host + SERVICES_PATH + "Auth/login?principal.username=" + username + "&password=" + password;
    self.xmlHttpReq.open('GET', url, true);
    self.xmlHttpReq.onreadystatechange = function() {
        if (self.xmlHttpReq.readyState == 4) {
            var sessionId = getSessionId(self.xmlHttpReq.responseXML);
            // the session id can now be used in subsequent Scripto calls
        }
    };
    self.xmlHttpReq.send();
}

function getSessionId(xml) {
    var value;
    if (window.ActiveXObject) {
        // xml traversing with IE
        var objXML = new ActiveXObject("MSXML2.DOMDocument.6.0");
        objXML.async = false;
        objXML.loadXML(xml);
        objXML.setProperty("SelectionNamespaces", "xmlns:ns1='http://type.v1.webservices.sl.axeda.com'");
        objXML.setProperty("SelectionLanguage", "XPath");
        value = objXML.selectSingleNode("//ns1:sessionId").childNodes[0].nodeValue;
    } else {
        // xml traversing in non-IE browsers
        var node = xml.getElementsByTagNameNS("*", "sessionId");
        value = node[0].textContent;
    }
    return value;
}

authenticate("http://mydomain.axeda.com", "myUsername", "myPassword");

Calling Scripto via AJAX
Once you have obtained a session id through authentication via AJAX, you can use that session id in Scripto calls. The following is a utility function which is frequently used to wrap Scripto invocations from a UI.

function callScripto(host, scriptName, sessionId, parameter, div) {
    try {
        netscape.security.PrivilegeManager.enablePrivilege("UniversalBrowserRead");
    } catch (e) {
        // must be IE
    }
    var self = this;
    // Mozilla/Safari
    if (window.XMLHttpRequest) {
        self.xmlHttpReq = new XMLHttpRequest();
    }
    // IE
    else if (window.ActiveXObject) {
        self.xmlHttpReq = new ActiveXObject("Microsoft.XMLHTTP");
    }
    var SERVICES_PATH = "/services/v1/rest/";
    var url = host + SERVICES_PATH + "Scripto/execute/" + scriptName + "?sessionid=" + sessionId;
    // parameters on a GET request belong in the query string, not in the body
    if (parameter != null) url += "&" + parameter;
    self.xmlHttpReq.open('GET', url, true);
    self.xmlHttpReq.onreadystatechange = function() {
        if (self.xmlHttpReq.readyState == 4) {
            updatepage(div, self.xmlHttpReq.responseText);
        }
    };
    self.xmlHttpReq.send(null);
}

function updatepage(div, str) {
    document.getElementById(div).innerHTML = str;
}

callScripto("http://mydomain.axeda.com", "myGroovyScriptName", "mySessionId", "myparameter=foo", "myDivId");

A more modern jQuery-based example might look like the following:

function callScripto(host, scriptName, sessionId, parameter, div) {
    var url = host + '/services/v1/rest/Scripto/execute/' + scriptName + '?sessionid=' + sessionId;
    if (parameter != null) url += '&' + parameter;
    $.ajax({
        url: url,
        success: function(response) { updatepage(div, response); }
    });
}

function updatepage(div, str) {
    $("#" + div).html(str);
}

callScripto("http://mydomain.axeda.com", "myGroovyScriptName", "mySessionId", "myparameter=foo", "myDivId");

In Conclusion
As shown above, Scripto offers a number of ways to interact with the platform. On each version of the Axeda Platform, all supported v1 and v2 APIs are available for Scripto to interact with the Axeda domain objects and implement business logic that solves real-world customer problems.

Bibliography (PTC.net account required):
- Axeda v2 API/Services Developer's Reference, Version 6.8.3, August 2015
- Axeda® v1 API Developer's Reference Guide, Version 6.8, August 2014
- Documentation Map for Axeda® 6.8.2, January 2015
Hi All,

We will host a live Expert Session: "ThingWorx Active Active Clustering" on January 21st, 8h00 EST. Please find below the description of the expert session and the registration link.

Expert Session: ThingWorx Active Active Clustering
Date and Time: January 21st, 8h00 EST
Duration: 1 hour
Host: Ayush Tiwari - IoT Product Manager
Registration Here: https://www.ptc.com/en/customer-success/expert-sessions-for-thingworx-foundation-webcasts (scroll down; the session is at the bottom of the page)

Description: This session will cover the main aspects of the High Availability Clustering feature launched with the ThingWorx 9.0 release. Join us and bring your questions with you!

Existing recorded sessions can be found on the support portal using the keyword ‘Expert Sessions’. You can also suggest topics for upcoming sessions using this small form.

Here are some recorded sessions that might be of interest to you. You can find recordings for the full library of webinars using the keyword ‘Expert Sessions’ in the PTC support portal search.

Upgrade to ThingWorx 9 – How to Plan / Evaluate Impacts
This session highlights the key points you should evaluate to properly plan your upgrade to ThingWorx 9.
Recording Link

ThingWorx Flow Overview
Flow is a powerful component of the ThingWorx platform. This session takes the Flow discussion beyond basic applications and into more customized and complex solutions, focusing on use cases, main features such as triggers and connector options, the main enhancements for ThingWorx 9.0, and a short demonstration.
Recording Link
In recent times, one of the frequent questions regarding PostgreSQL is which tools work well with it. As PostgreSQL's functionality grows, more vendors are producing tools for it: there are many tools for management, development, and data visualization, and the list is growing. Here, I'm listing a few tools that might be of interest to ThingWorx users.

psql terminal:
The psql client is a command-line client distributed with PostgreSQL, often called the interactive terminal. psql is a simple yet powerful tool with which you can directly interface with the PostgreSQL server. The psql client comes by default with the PostgreSQL database.
Key features:
- Issue queries either through commands or from a file.
- Provides shell-like features to automate tasks.
For more information, refer to http://www.postgresql.org/docs/9.5/static/app-psql.html

pgAdmin III:
pgAdmin III is a GUI-based administration and development tool for the PostgreSQL database. It serves the needs of both admins and regular users, from writing simple SQL queries to developing complex databases.
Key features:
- Open source and cross-platform support.
- No additional drivers are required.
- Supports more than 30 different languages.
Note: pgAdmin III comes by default with the PostgreSQL 9.4 installer.
For more information, refer to http://www.pgadmin.org/download/

phpPgAdmin:
phpPgAdmin is a web-based client for managing PostgreSQL databases. It provides the user with a convenient way to create databases, create and alter tables, and query the data using SQL.
Key features:
- Open source and supports PostgreSQL 9.x.
- Requires a web server.
- Administers multiple servers.
- Supports the Slony master-slave replication engine.
For the phpPgAdmin download: http://phppgadmin.sourceforge.net/doku.php?id=download

TeamPostgreSQL:
TeamPostgreSQL is a browser-based tool for PostgreSQL administration. Using TeamPostgreSQL, database objects can be accessed from anywhere in the web browser.
Key features:
- Open source and cross-platform support.
- Supports SSH for both the web interface and the database connections.
- GUI with tabbed SQL editors.
For the TeamPostgreSQL download: http://www.teampostgresql.com/download.jsp

Monitoring Tools

pgBadger:
pgBadger is a PostgreSQL log analyzer for generating reports from PostgreSQL log files. It is built in Perl and uses JavaScript and Bootstrap libraries. It is often seen as a replacement for the pgFouine log analyzer.
Key features:
- Open source community project.
- Autodetects PostgreSQL log file formats (stderr, syslog, or csvlog).
- Provides reports and statistics on SQL queries.
- Can also be limited to reporting only errors.
- Generates pie charts and time-based charts.
For more information, refer to http://dalibo.github.io/pgbadger/. Git download: https://github.com/dalibo/pgbadger/releases

PostgreStats:
Postgrestats is a software package with automated scripts to easily view statistics such as commits, rollbacks, user inserts, updates, and deletes at time-based intervals. Postgrestats is installed and executed on the database server and customizes the main conf file. Postgrestats also provides an enterprise application for replication mode and high availability.
Key features:
- Open source and easy-to-set-up installation.
- Takes snapshot reports based on time intervals.
- Optional email on update.
- Text-file data storage.
- Also provides an enterprise application, PostgreStats Enterprise.
For more information, refer to http://www.postgrestats.com/subs/docs.html
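If all you need is a quick look at the same kind of statistics these monitoring tools summarize, a short script can query PostgreSQL's built-in pg_stat_database view directly. The following Groovy sketch is purely illustrative: the connection URL and credentials are placeholders, and it assumes the PostgreSQL JDBC driver (org.postgresql.Driver) is on the classpath.

import groovy.sql.Sql

// Placeholder connection details; adjust for your environment.
def sql = Sql.newInstance(
        'jdbc:postgresql://localhost:5432/postgres',  // hypothetical URL
        'dbuser',                                     // hypothetical username
        'dbpassword',                                 // hypothetical password
        'org.postgresql.Driver')

try {
    // pg_stat_database is the built-in statistics view behind numbers like the
    // commits and rollbacks that a tool such as PostgreStats reports.
    sql.eachRow('''SELECT datname, xact_commit, xact_rollback
                   FROM pg_stat_database
                   ORDER BY datname''') { row ->
        println "${row.datname}: ${row.xact_commit} commits, ${row.xact_rollback} rollbacks"
    }
} finally {
    sql.close()
}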
Slemma:
Slemma is a collaborative data visualization tool for the PostgreSQL database. Slemma allows database connections with near one-click integration and can generate a dashboard from files. Slemma comes with a commercial license at $29 per user per month.
Key features:
- Create charts and interactive dashboards by selecting tables.
- Non-developers can easily create visualizations (with no coding).
- Email dashboards automatically to clients or your entire team.
For more information, refer to https://slemma.com/

Ubiq:
Ubiq is a web-based business intelligence and reporting tool for the PostgreSQL server. Ubiq creates reports and online dashboards, and can export them in multiple formats. Ubiq is distributed with a commercial license.
Key features:
- Drag & drop interface to create interactive charts, dashboards, and reports.
- Apply powerful filters and functions to the data.
- Share your work and schedule email reports.
For more information, refer to http://ubiq.co/tour