IoT Tips

After months of development, hours of user interviews, and countless coffees, ThingWorx Solution Central is here! I posted a few weeks ago introducing the new cloud solution management portal, but just as a reminder: ThingWorx Solution Central is a brand-new set of cloud services offered to help you more efficiently manage and deploy your solutions. ThingWorx Solution Central automatically identifies and packages up your dependencies so you can slash your time to deploy. Once you develop your solution in ThingWorx (using the Projects feature), ThingWorx Solution Central automatically packages up all the artifacts and dependencies required for your solution to run; you can then publish your "package" to the cloud, where it is ready to be deployed to your specified environment(s). Ready to get started? Request access to ThingWorx Solution Central here and you'll be on your way to easier solution deployments in no time. And if you happen to run into any trouble, see the ThingWorx Solution Central Help Center. Happy deploying! Stay connected, Kaya
Raise your hand if you're ready for seamless, rapid deployment of your ThingWorx applications with visibility into your various environments! It's time to say goodbye to error-prone deployments with manual dependency tracking and hello to Solution Central! Releasing this fall as part of 8.5, Solution Central is a brand-new cloud service coming to the ThingWorx platform to enable you to efficiently manage your ThingWorx applications across the enterprise. With Solution Central, you'll no longer be caught chasing missing dependencies (like Thing Shapes, Mashups, templates or library extensions). Solution Central automatically identifies and packages up the dependencies required for your application. No more manual dependency madness! Whether you're managing many apps deployed to a few environments or a single app deployed to hundreds of environments, Solution Central allows you to accelerate your deployment through an intuitive UI or powerful APIs for automation.

Here's how it works:
1. Begin by creating your application in Composer with a project.
2. Let Solution Central automatically package up all the artifacts and dependencies required for your application.
3. Allow Solution Central to publish your solution package to the cloud.
4. Deploy your application to your various environments (local servers, data centers, cloud systems) directly from Solution Central. It's like your company has its own private app store.

Here's a sneak peek of the Solution Central UI! Keep an eye out for the release of ThingWorx 8.5 at the end of September 2019 and begin accelerating your app deployment! Check out the presentation and demo my fellow PM Chris Baldwin and I delivered at LiveWorx19, and be sure to attend LiveWorx20! To navigate to our session recording, search for "Introducing Solution Central: Your Gateway to Accelerated IIoT Value Across the Enterprise" here. Sound interesting? Message me directly to discover how you can become part of the Solution Central Private Preview Program! -Kaya
ThingWorx offers Docker-based installations utilizing existing PostgreSQL databases. In newer releases the ThingWorx Docker installers also support other databases. Personally I'm using a method of deployment where I can easily exchange some files, create new images, and have an H2-based environment running for some quick tests. As H2 is a built-in database, I will not dive into setting up the platform-settings.json for other connectivity. However, other databases can be connected to by adjusting the platform-settings.json. This might also require an internal Docker network structure, which I will not elaborate on here.

Note: the following procedure is not fully supported, as it's not using the deployment methods provided by the installers!

Create the Directory Structure

My directory structure looks like the following (expanded for the 8.2.x branch):

/home/ts/docker/
    twx.8.0.x.h2
    twx.8.1.x.h2
    twx.8.2.x.h2
        Dockerfile
        settings
            platform-settings.json
            <license_file>
        storage
        Thingworx.war
    twx.8.3.x.h2

I have a directory for every version I want to test with. In each directory there's the Dockerfile - the recipe file I'm using. There's also the version-specific Thingworx.war file as well as two directories, settings and storage, which I will map to the ThingWorx directories inside the image later.

The Recipe File

FROM tomcat:latest
MAINTAINER me@somewhere.com
LABEL version="8.2.0"
LABEL database="H2"
RUN mkdir -p /ThingworxPlatform
RUN mkdir -p /ThingworxStorage
RUN mkdir -p /ThingworxBackupStorage
ENV LANG=C.UTF-8
ENV JAVA_OPTS="-server -d64 -Djava.awt.headless=true -Djava.net.preferIPv4Stack=true -Dfile.encoding=UTF-8 -Duser.timezone=GMT -XX:+UseNUMA -XX:+UseG1GC -Djava.library.path=/usr/local/tomcat/webapps/Thingworx/WEB-INF/extensions"
COPY Thingworx.war /usr/local/tomcat/webapps
VOLUME ["/ThingworxPlatform", "/ThingworxStorage"]
EXPOSE 8080

I change the version label to keep track of the versions for each recipe.

Deploying

Build the Docker image by navigating to the directory where the recipe file is located:

sudo docker build -t twx.8.2.x.h2 .

Create a Docker container and start it:

sudo docker run -d --name=twx.8.2.x.h2 -p 82:8080 -v /home/ts/docker/twx.8.2.x.h2/storage:/ThingworxStorage -v /home/ts/docker/twx.8.2.x.h2/settings:/ThingworxPlatform twx.8.2.x.h2

I change the name of the image and the container as well as the external port to distinguish all the different versions. The -v option maps the paths in my operating system to the paths in the Docker container, so I can browse the ThingworxStorage and ThingworxPlatform folders without connecting inside the container. That's quite handy for checking the logs or placing the license file.

Starting and Stopping

I can fire up and shut down the containers I need with the following commands:

sudo docker start twx.8.2.x.h2
sudo docker stop twx.8.2.x.h2

What next

That's just my basic setup. Usually I copy & paste a working directory for deploying another version and adjust what needs to be changed. You could use this as a basis for quick and easy deployment where even additional features could be added, e.g. HTTPS configuration or auto-deploying certain ThingWorx Extensions via a REST API call. To ensure starting with a clean image, before building new images I delete the contents of the storage folder and only leave the platform-settings.json in the settings folder (I copy the license later, after generating it with my new Device ID).
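To illustrate the extension auto-deployment idea mentioned under "What next", here is a minimal sketch of importing an extension through the platform's REST extension uploader; the application key and extension archive name are assumptions, and port 82 matches the run command above:

# Hypothetical app key and extension zip; newer platform versions also expect the XSRF header
curl -X POST \
  -H "appKey: <your-app-key>" \
  -H "X-XSRF-TOKEN: TWX-XSRF-TOKEN-VALUE" \
  -F "file=@MyExtension.zip" \
  "http://localhost:82/Thingworx/ExtensionPackageUploader?purpose=import"

A script looping over such calls right after container startup would give you a repeatable, pre-provisioned test environment.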
ThingWorx 8.2 System Requirements
ThingWorx 8.2 Helpcenter

The following feature enhancements and bug fixes exist in ThingWorx 8.2.0. Due to security updates, a minimum version of Apache Tomcat 8.0.47 or 8.5.23 should be used with ThingWorx.

Enhancements

Platform
• Included information about opting out of metrics reporting. For more information, see the ThingWorx Metrics Reporting Services Configuration section in the Platform Subsystem help topic.
• The Script Log Error has been added to improve error logging for scripts.
• Added support to allow mashups to be rendered using the jQuery 3.x runtime.
• Query service optimization. This includes improved performance for the QueryPropertyHistory and QueryNamedPropertyHistory services. Previously, a database call was made for every logged property. With this improvement, one database call is made for all logged properties, resulting in the following improvements:
  ▪ A ~20% decrease in memory usage for the QueryPropertyHistory and QueryNamedPropertyHistory service queries if no filters are applied (PostgreSQL and MSSQL).
  ▪ Decreased time to execute queries (~10%) for the QueryPropertyHistory and QueryNamedPropertyHistory services, depending on latency to the database (PostgreSQL and MSSQL).
  ▪ An additional decrease in memory, based on the filter supplied during the query, for the QueryPropertyHistory and QueryNamedPropertyHistory services (PostgreSQL and MSSQL). If a filter is applied that reduces the record count by 50%, then there is an additional 50% decrease in memory usage on top of the decrease described in the first point. This optimization also results in an approximately 10% decrease in memory for single-property queries.
• The Audit Subsystem has been added. It supports the following capabilities:
  ◦ Automatically add audit entries to online storage.
  ◦ Search for audit entries (use the QueryAuditHistory service) stored online.
  ◦ Archive online audit entries to offline storage (automatically performed daily by default).
  ◦ Export audit data, using the language selected for the export.
  ◦ Purge online audit data based on a specified number of days for audit data to remain online, and also based on a specified number of rows to keep online.
  ◦ Clean up archived audit data automatically, based on a configured schedule.
• The security of the PASSWORD base type has been enhanced and is now encrypted. See Passwords for more information.
• Added the Collection Widget, which allows you to replicate/repeat mashups and content by using infotables to dynamically supply visual content and data. Refer to the KCS article for additional information here.
• Additional capability has been added to New Composer. For more information, refer to the ThingWorx Community blog.
• The licensing process has been improved. An activation ID is no longer required to obtain a license, and a new license file is not required for minor or major release upgrades.
  ◦ For connected scenarios, activation IDs are no longer required in the platform-settings.json file.
  ◦ For disconnected scenarios, go to the enhanced PTC Support site pages, select the product, enter a Device ID, and retrieve a license.
• You can enable the Application Key Authenticator when SSO is enabled by editing the sso-settings.json configuration. For more information, see Configure the sso-settings.json File.
• The CSS Editor was added to Mashup Builder, which allows developers to create modern experiences with responsiveness, animations, and advanced styling and behaviors. Refer to the KCS article for additional information here.
• Added support for "Store and Forward" functionality to the interface between KEPServerEX and the ThingWorx platform. KEPServerEX can be configured to store updated property data to disk when disconnected from the ThingWorx platform and will send that data gracefully when the connection is re-established.
• In mashups, row and column gadget sizes 1 to 8 are now available. (TW-25477)

Bug Fixes

Platform (Related JIRA)
• Fixed an issue with Thing Shapes when editing subscriptions twice before canceling or closing, in which the second edit was not saved. (TW-28718)
• Fixed an issue that was causing SQL Server apparent deadlock exceptions. (TW-28208)
• Added useful log information for troubleshooting LDAP and Active Directory errors. (TW-23873)
• Fixed an issue with exception handling in DSLProcessor in which line numbers were not included in the log. (TW-18042, TW-17255)
• Fixed an issue in which opening/closing brackets were not highlighted if there were 100 or more lines of code in a JavaScript service. (TW-12740)

Mashup Builder
• Service error notification messages were fixed to display on multiple lines based on line breaks in the message. (TW-24738)
• Fixed an issue in which a master mashup header image was not fully displayed. (PSPT-3365)

Extensions (Related JIRA)
• The Google Maps JavaScript API was updated to prevent the use of the library without an API key. If you are using the Google Map extension in your application, verify that the extension's metadata.xml file is updated with the correct URL (https://maps.google.com/maps/api/js?sensor=false&key=YOUR_API_KEY). Re-zip the extension and reimport it into ThingWorx after making this change.
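As a quick illustration of the optimized query services noted above, here is a minimal ThingWorx service-code sketch; the Thing name is hypothetical and assumes properties logged to a value stream:

// Pull the last 24 hours of logged history for all logged properties of one Thing
var result = Things["MyMonitoredThing"].QueryPropertyHistory({
    startDate: dateAddDays(new Date(), -1), // platform date helper
    endDate: new Date(),
    maxItems: 500,
    oldestFirst: false
});

With 8.2, a call like this issues one database round trip for all logged properties instead of one per property.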
Note: The following tutorial is based on ThingWorx/CWC 9.5. Steps and names may differ in other versions.

Context
As a human reaction, the tracked time displayed may be misperceived by the operator, and this can lead to rejection of the solution. CWC doesn't have (yet?) the capability to configure visibility to hide the timer. The purpose of this tutorial is a quick, straight-to-the-point customization to hide the timer in the execution screen. All other features, services and interfaces are left untouched.

As a big picture, here are the modifications you will need to make:
• Modify the 4 mashups
• Modify 2 values in 2 tables of the MSSQL database

Status
The mashup containing the timer is PTC.FSU.CWC.Execution.Overview_MU. It is easy to duplicate it and hide the timer widget (switch the Visible property to false). But then, how do you set it in the standard interface? To do that, you need to duplicate the mashups linked to the Execution.Overview_MU mashup. PTC.FSU.CWC.Execution.Overview_MU is directly referenced by the following entities:
• PTC.FSU.CWC.Authoring.Preview_MU
• PTC.FSU.CWC.Execution.WorkInstructionStart_MU
• PTC.FSU.CWC.GlobalUI.ApplicationSpecificHeader_HD
• PTC.FSU.CWC.WorkDefinitionExecution.StationSelectionContainer_EP

Customization
Duplicate all those mashups except Authoring.Preview_MU, which only affects the authoring side of CWC. Hereafter each duplicate is called the same as the original + _DUPLICATE. Perform the following modifications.

Open PTC.FSU.CWC.GlobalUI.ApplicationSpecificHeader_HD_DUPLICATE, then in Functions:
• Open the expression named NavigateToStationSelection. Change the mashup name to the relevant one, for example: PTC.FSU.CWC.WorkDefinitionExecution.StationSelectionContainer_EP_DUPLICATE.
• Open the validator named ShowRaiseHand. Change the names of the 2 mashups to the relevant ones, for example: PTC.FSU.CWC.Execution.WorkInstructionStart_MU_DUPLICATE and PTC.FSU.CWC.Execution.Overview_MU_DUPLICATE.
• Open the validator named ShowStationSelection. Change the names of the 2 mashups to the relevant ones, for example: PTC.FSU.CWC.Execution.WorkInstructionStart_MU_DUPLICATE.

Open PTC.FSU.CWC.Execution.Overview_MU_DUPLICATE, then in Functions:
• Open the expression named SetMashupToWorkInstructionStart. Change the mashup name to the relevant one, for example: PTC.FSU.CWC.Execution.WorkInstructionStart_MU_DUPLICATE.
• Open the expression named SetMashupToStationSelection. Change the mashup name to the relevant one, for example: PTC.FSU.CWC.WorkDefinitionExecution.StationSelectionContainer_EP_DUPLICATE.

Open PTC.FSU.CWC.Execution.WorkInstructionStart_MU_DUPLICATE, then in Functions:
• Open the expression named NavigateToStart. Change the mashup name to the relevant one, for example: PTC.FSU.CWC.Execution.Overview_MU_DUPLICATE.
• Open the expression named NavigateToStationSelection. Change the mashup name to the relevant one, for example: PTC.FSU.CWC.WorkDefinitionExecution.StationSelectionContainer_EP_DUPLICATE.

Open PTC.FSU.CWC.WorkDefinitionExecution.StationSelectionContainer_EP_DUPLICATE, then in Functions:
• Open the expression named NavigateToStart. Change the mashup name to the relevant one, for example: PTC.FSU.CWC.Execution.Overview_MU_DUPLICATE.
• Open the expression named NavigateToWorkInstructionStart. Change the mashup name to the relevant one, for example: PTC.FSU.CWC.Execution.WorkInstructionStart_MU_DUPLICATE.

Now, let's change the database values (a SQL sketch of both updates follows the Result section below). In MSSQL, navigate to the thingworxapps database and edit the dbo.menu table. Find the row for AssemblyExecution (by default row 22) and locate the value in the targetmashuplink column. Replace the original value PTC.FSU.CWC.WorkDefinitionExecution.StationSelectionContainer_EP with the name of your duplicate of this mashup. Lastly, edit the dbo.menucontext table. Find the row related to CWC (application UID = 5) and locate the value in the targetmashuplink column. Replace the original value PTC.FSU.CWC.GlobalUI.ApplicationSpecificHeader_HD with the name of your duplicate of this mashup.

Result
After these modifications, start and check an operation: the execution screen should now appear without the timer.
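As referenced above, here is a sketch of the two database updates. It matches rows by their current targetmashuplink value; verify the matched rows first (e.g., with a SELECT) before running:

-- Point the AssemblyExecution menu entry at the duplicated station selection mashup
UPDATE dbo.menu
SET targetmashuplink = 'PTC.FSU.CWC.WorkDefinitionExecution.StationSelectionContainer_EP_DUPLICATE'
WHERE targetmashuplink = 'PTC.FSU.CWC.WorkDefinitionExecution.StationSelectionContainer_EP';

-- Point the CWC menu context at the duplicated application-specific header
UPDATE dbo.menucontext
SET targetmashuplink = 'PTC.FSU.CWC.GlobalUI.ApplicationSpecificHeader_HD_DUPLICATE'
WHERE targetmashuplink = 'PTC.FSU.CWC.GlobalUI.ApplicationSpecificHeader_HD';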
An introduction to installing the ThingWorx platform. Information on the environment, prerequisites, and configuration steps when installing ThingWorx. Includes walkthroughs of installing with H2 and PostgreSQL databases, an introduction and demonstration of the Linux installation script, solutions to common installation problems and more.     For full-sized viewing, click on the YouTube link in the player controls.   Visit the Online Success Guide to access our Expert Session videos at any time as well as additional information about ThingWorx training and services.
This video is the third part of a series of 3 videos walking you through how to set up ThingWatcher for Anomaly Detection. In this video we will use the Anomaly Mashup to visualize data received from my remote device. Updated link for access to this video: Anomaly Detection 8.0: Viewing Data via Anomaly Mashup: Part 3 of 3
Announcing the Final Installment: JMeter for ThingWorx, the Comprehensive Guide and Best Practice Tips

This is the final post on using JMeter for ThingWorx. Below are best practice tips for using JMeter and for load testing in general. Attached to this post is a comprehensive guide including all of the information from every post we've made on JMeter, including the tutorials. For a more central source, feel free to download the guide, or see the past posts here:
• JMeter for ThingWorx (original post)
• Building More Complex Tests in JMeter
• Distributed Testing with JMeter
• Generating and Reviewing JMeter Results

JMeter Best Practice Tips

Use Distributed Testing
As already mentioned in a previous post, each JMeter client can only handle about 150-250 threads depending on the complexity of the tests, and each client will need around 1 CPU and up to 8 GB of RAM for the Java heap. Some test plans will run with fewer host resources, so resizing the test client VM up or down is often required during test development. Create a batch or shell script to start the multiple JMeter clients for greater ease of use.

Use Non-Graphical Mode
Non-graphical mode allows the system to scale up higher; client processing uses up resources just to keep the simulation running, but with graphical mode turned off, there is less of an impact on the response times and other results. Graphical mode is essentially only used for debugging. (Command-line sketches for this tip and the distributed-testing tip appear after this list of tips.)

Turn off Embedded Resources
This setting reloads all of the typically cached requests over and over; there will be far more download requests, to the exclusion of other requests, than is helpful. Ensure this box is not checked, especially in the HTTP Request Defaults element. Browser caching means that this setting doesn't actually simulate a proper user load, given that many of the reloaded resources would not be reloaded by actual users. Use this incrementally, for one or two HTTP requests only, if there is a reason why those requests might need to download fresh images, scripts, or other resources with each call; for instance, simulate page timeouts using this once per hour or something similar. Using this across the whole project will prevent it from scaling well, while not actually simulating real-world conditions.

Avoid Using Listeners
For instance, the "View Results Tree" listener uses additional resources that may skew the results, reflecting the needs of the clients themselves rather than the actual response times of the server. Many listeners are only for debugging a handful of threads while designing the tests. A list of recommended listeners for different purposes is in the JMeter documentation. Summary Report is the only one you want enabled, as it exports the results as a CSV or similarly formatted file, which can then be used to build reports.

JMeter CAN Handle SSO
JMeter can authenticate into and test an SSO-enabled system. Sometimes the SSO configuration is essential for customers, and they may be quick to assume therefore that they cannot use JMeter, but that's not entirely true. Some external tools that might help with this are BlazeMeter (mentioned again in just a moment) and Fiddler, a good tool for decoding what data a particular SSO setup is exchanging during the authentication process.
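As referenced above, here are minimal command-line sketches for the non-graphical and distributed-testing tips; file names, host addresses, and output paths are placeholders:

# Run a test plan in non-GUI mode, logging results for the Summary Report
jmeter -n -t loadtest.jmx -l results.csv

# Same run, but driving two remote JMeter server machines (distributed testing)
jmeter -n -t loadtest.jmx -l results.csv -R 10.0.0.11,10.0.0.12

# Optionally generate the HTML dashboard from the results after the run
jmeter -n -t loadtest.jmx -l results.csv -e -o ./report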
Use Logic Controllers for Parametrization
Parametrization is critical to mirroring a proper user load and allowing different data sets to be queried or created; the load should seem organic, random in the right ways, with actions occurring at random times, not predictable times, to prevent artificial peaks of usage that don't represent real usage of the Foundation server. Random Order Controllers direct the threads down different paths based on random dice rolls, allowing for a randomized collection of user activity each time, not something that has to be regenerated like a set of Boolean values specified in an input CSV and used to navigate a series of true-or-false switches. Switches just look for an environment variable to be either 1 or 0, and when a thread hits a switch that's a 1, it triggers the switch below, running them in the order given under the transaction controller that goes with the switch. (In the accompanying screenshot, the 1's and 0's are given in the CSV input file; randomizing that input file therefore randomizes the execution of the switches too.)

Use Commercial Add-Ons
There are many external add-on tools and plugins which enhance JMeter's capabilities. One such tool is BlazeMeter, which has some free and some paid options to help create better reports, automatically removing much of the "garbage" REST calls (which would otherwise need to be manually deleted) and providing more consumable test reports right out of the box. Other tools and plugins include: Maven, NetBeans, SonarQube, Jenkins, Autometer, Gradle, Amazon EC2, Lightning, IntelliJ IDEA, Cassandra, and Grafana. For more best practice information, see the JMeter Best Practices manual.

General Load Testing Guidelines

Concurrency Requirements - How to Properly Estimate the Size of the Load Test
Take a brand new ThingWorx-based app: how will people access the system, and how often? How many are business users? How many are engineers? What do they do? Many assume that every named user in the corporate LDAP will need access to the server, often tens of thousands of users; this generally drastically oversizes the system. Load testing for many thousands of users is very hard and requires a lot of set-up, tuning, and optimization to get right; so if it seems that thousands of users are expected, then validating this claim is important: most customers don't really have that many concurrent users in an engineering system. Use estimates based on how many people work at which offices, which time zones those people are in, and what kinds of users they are. Do they need access to engineering data? Perhaps there are simpler mashups for them that use fewer resources. One tool that PTC offers for these sorts of estimations (office time-zone overlap) is the Windchill Sizing Calculator. Other ways to estimate include:
• Analyzing the business processes: things like how long workloads typically take to complete and how many workloads are generated per day, converted into hour, minute, or second as desired for the peak duration, the length of the test.
• "Day in the Life" modelling: considering things like "what does user X do in a day?" Maybe user X checks out some drawings, edits them, and then checks them back in at 4:30. Maybe user Y actually digs into the underlying parts and assemblies, putting in change requests or orders throughout the day instead of waiting for the end. Models are made based around the types of users.
• Also consider:
  ◦ What are the worst-case scenarios?
  ◦ What are the longest-running activities?
  ◦ What produces the largest data transfers?
  ◦ What activities have large, heavy database queries?
  ◦ When is the peak overlap of usage? Beginning- and end-of-day downloads and check-ins? Reports that are generated regularly? How do these impact the foreground users?

For a simpler estimate, start with a percentage of the named user count; anywhere from 5-15% is a good ballpark. Don't overestimate just to feel like the application has been financially worth it; even if everyone is logged in and using it all at once, which is unlikely, load testing for every single user doesn't take into account the fact that people pause in between clicking on things to think, type emails, get coffee, and so forth. Fewer people than expected are actually doing concurrent activities like loading web pages and updating data streams. Whenever possible, use concurrency data from existing customer systems to guide the estimate for the new system. Legacy systems are great places to start.

Use Grafana to monitor the system side throughout the load test, which is also required to know the test has been successful; also set up Grafana to monitor the application once it goes live, to both prevent and more rapidly mitigate any technical issues with the server. Also remember that PTC Technical Support is here to help! Provide thread dumps with an open case to any TSE, and they will help troubleshoot the tests and review any errors in the ThingWorx or Tomcat logs.
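As a quick worked example of the percentage-based estimate above (all numbers hypothetical):

8,000 named users × 10% concurrency ≈ 800 concurrent virtual users
800 virtual users ÷ 200 threads per JMeter client ≈ 4 distributed JMeter clients

This also ties back to the distributed-testing tip: at roughly 150-250 threads per client, a test of this size needs several remote clients rather than one overloaded machine.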
Remote Monitoring of Assets in Connected Factories

As stated in the previous reference benchmark, one of the missions of the IoT Enterprise Deployment Center (EDC) is to showcase how real-world IoT business problems are solved. Our goal is that these benchmarks can be used as a reference or baseline for architects working on their own implementations, showing not only a successful at-scale implementation, but also what happens when that same implementation is pushed to, or even past, its limits.

The second in this series is attached here, this time reflecting a Connected Factory implementation. ThingWorx was deployed alongside Kepware Server, with the number of things, the number of properties, and the write rate for those properties being varied to once again test the capabilities of a remote monitoring use case, but this time in a Connected Factory setting. The business logic was kept simple to ensure it was not the limiting factor as the throughput between Kepware Server and ThingWorx was pushed to the limit. See firsthand the capabilities of Kepware Server and ThingWorx Foundation to handle implementations centered around real-time data reporting.

More Connected Factory implementations will be added to this document in time, with multiple Kepware Server deployments and other scenarios to come. Please feel free to use this community post to ask any questions about our approach and discuss any design, deployment, and simulation factors.
Background: In the event that a Gateway/Connector Agent is offline or unable to connect to the Axeda Cloud Server, it uses an internal message queue to store information until the connection is restored. The message queue size is configured in the Axeda Builder project. By default, the queue is 200KB in size. Depending on how frequently your Agent sends data, or how much data your Agent is collecting and trying to send, 200KB may be too small. If the queue is too small, the data will "overflow" the queue. The queue is kept in memory only; data is not stored to disk and will be removed in a First-In-First-Out (FIFO) manner when the queue overflows. If you see queue overflow error messages in the Agent log (either EKernel.log or xGate.log), it may be time to change the size of the outbound message queue.

The correct size setting for the Agent outbound message queue takes three variables into consideration:
• How much information are you sending?
• What is the maximum expected duration for loss of connection to the Internet (Cloud Server)?
• How much memory is available for your process?

The more information the Agent is trying to send, the larger the queue size setting should be. Consider also that if your Agents are offline (disconnected) for a long period of time, they will likely accumulate lots of data, which may overflow the outbound message queue. If this is the case, you'll need to increase the queue or risk losing data.

Recommendation: Consider how the Agent operates (offline/online data collection) and how much data may be queued. When selecting the size of the queue, it's important to maintain a balance between protecting against data loss and not occupying too much memory. If you do determine that you need to increase the outbound message queue size, note that Axeda recommends a maximum outbound message queue size of about 2MB.

Need more information? For information about specifying Agent outbound message queue size, see the online help in Axeda® Builder (Enterprise Server Settings). For information about how the Agent delivers data to the Platform (via EEnterpriseProxy/xgEnterpriseProxy), see the Agent user's guide for your Agent: either Axeda® Platform Axeda® Gateway User's Guide (PDF) or Axeda® Platform Axeda® Connector User's Guide (PDF). Axeda Support Site links: Axeda® Gateway User's Guide, Axeda® Connector User's Guide.
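As a hypothetical sizing illustration using the three variables above: an Agent that queues roughly 2KB of data per minute and must ride out a worst-case 4-hour connection loss needs at least

2KB/min × 240 min = 480KB

so rounding up to a 512KB queue protects against data loss with some headroom, while staying well under the recommended 2MB maximum (assuming the process has that memory available).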
Database backups are vital when it comes to ensuring data integrity and data safety. PostgreSQL offers simple solutions to generate backups of the existing ThingWorx database instance and recover them when needed.

Please note that this does not replace a proper and well-defined disaster recovery plan. Export and import are part of such a strategy, but do not constitute the complete strategy. The commands used in this post are for Windows, but can be adjusted to work on Linux-based systems as well.

Backup

To create a backup, the export/dump functionality of PostgreSQL can be used:

pg_dump -U postgres -C thingworx > thingworxDump.sql

The -C option will include the statement to create the database in the .sql file and map it to the existing tablespace and user (e.g. 'thingworx' and 'twadmin'). The tablespace and user can be seen in the .sql file in the line with "ALTER DATABASE <dbname> OWNER TO <user>;". In the above example, we're backing up the thingworx database schema and dumping it into the thingworxDump.sql file. The tablespace, username & password are also included in the platform-settings.json.

Restore

To restore the database, we assume an empty PostgreSQL installation. We need to create the DB schema user first via the following command line:

psql -U postgres -c "CREATE USER twadmin WITH PASSWORD 'ts';"

With the user created, we can now re-generate the tablespace and grant permissions to the twadmin user:

psql -U postgres -c "CREATE TABLESPACE thingworx OWNER twadmin LOCATION 'C:\ThingWorx\ThingworxPostgresqlStorage';"
psql -U postgres -c "GRANT ALL PRIVILEGES ON TABLESPACE thingworx TO twadmin;"

Finally, the database itself can be restored by using the following command line:

psql -U postgres < thingworxDump.sql

This will create the database, populate it with tables, functions and sequences, and also restore the data from the .sql file. It is important to have the database, username and tablespace match the original system - otherwise granting permissions and re-creating data might fail on recovery. The user and tablespace can also be reused, so only the database has to be deleted before restoring it.

Part of the strategy

Part of the backup and recovery strategy should be to actually test both the backup and the recovery part of the procedure. It should be well tested on a test environment before being deployed to any production environment. Take backups on a regular basis and test for disaster recovery once or twice a year, to ensure the procedure is still valid. Data is the most important asset in your application - protect it!
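As one way to act on taking backups on a regular basis, here is a minimal Windows sketch; the paths, password handling, and schedule are assumptions to adapt to your environment:

@echo off
rem backup_thingworx.bat - nightly dump with a date-stamped file name (hypothetical paths)
set PGPASSWORD=<postgres_password>
pg_dump -U postgres -C thingworx > C:\Backups\thingworxDump_%DATE:/=-%.sql

A matching task registration could then run it daily at 02:00:

schtasks /Create /TN "ThingWorx PostgreSQL Backup" /TR "C:\Backups\backup_thingworx.bat" /SC DAILY /ST 02:00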
Learn how to use the DBConnection building block to create your own DB tables in ThingWorx.
Original Post Date: June 6, 2016. Description: This is a video tutorial on creating a Stream, adding a Data Shape with properties, and writing values to the Stream.
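To complement the video, here is a minimal ThingWorx service-code sketch of writing one entry to a Stream; the Stream name and the temperature field are hypothetical, assuming a Data Shape with that field:

// Build an infotable row matching the Stream's Data Shape
var values = Things["MyStream"].CreateValues();
values.temperature = 72.5; // hypothetical Data Shape field

// Append the entry to the Stream
Things["MyStream"].AddStreamEntry({
    values: values,
    source: me.name,
    sourceType: "Thing",
    timestamp: new Date(),
    tags: undefined
});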
In this video you will see how to start using your already-created virtual image of ThingWorx Analytics with Oracle VirtualBox. This video is Part 1 of the series Getting Started with ThingWorx Analytics. Updated link for access to this video: Getting Started with ThingWorx Analytics: Part 1 of 2
In this video we cover the process of installing ThingWorx Analytics Server 52.1. Make sure you have reviewed the Part 1 video covering the prerequisites. Updated link for access to this video: Installing ThingWorx Analytics Server: Part 2 of 2
Sunshine, beach chairs & ThingWorx 9.2. What more could you need for your summer essentials? Targeted for June 2021, our next release features intelligent one-click deploy with Solution Central, new web components, and an enhanced IAM integration! Let's dive deeper into each.

Deploy an entire solution in one click with Solution Central's intelligent one-click deploy. Good news: you followed a modular design pattern and broke up your application into smaller libraries and components. You can now enjoy easier maintenance and re-use of your app. Bad news: your app now has 10 different dependencies, with differing versions, each with a required order to import into ThingWorx. Now, try to share these modules with colleagues, or use them on environments where code may already exist. Not exactly a day at the beach, right? Fear not, one-click deploy has you covered. You click the button, we spin through and find the right dependencies, the right versions, the right order and load them all into the target platform upon a deploy request. Solution Central one-click deploy means more sun and sand for you! Check out this post to learn more about what's available in Solution Central 3.0! Intelligent One-Click Deploy with Solution Central

Enhance your solutions with our latest web components! Imagine this: you're a systems developer at a large parts manufacturer and your boss has asked for a detailed analysis of downtime over the last six months. Not to worry! ThingWorx 9.2 features a new waterfall chart that can be leveraged to understand dynamics in defect counts, loss reasons, time bottlenecks and other conditions. Be sure to try it out! And, while you're at it, try out our new web components that are available now as preview: a toolbar to add key actions like filtering at the top of your screen or above data-intensive widgets (e.g., grids), a more flexible grid, and a fancy new paradigm for interface developers. These three preview widgets are fully functional and tested in 9.2. Preview widgets will graduate in a future release when we add all planned functionality or address any perceived usability feedback. Don't be afraid: it just means more good things are coming. Surf's up, you can use these widgets safely now! New Waterfall Widget Coming in ThingWorx 9.2

Leverage new integrations with Azure Active Directory for more seamless user management. In prior releases we have offered integration to Azure Active Directory and SSO through Central Authorization Service-type products or through custom authenticator extensions to ThingWorx. With our new Azure AD integration, you can cut the custom extensions and additional software out of the picture. We now accept SAML assertions from Azure AD directly to the ThingWorx platform, which makes it that much easier to deploy your app in your organization's SSO flow. It's as smooth as that frosty tropical drink when the sun goes down.

Like what you see? Want to try it out for yourself? ThingWorx 9.2 is targeted for June 2021, so be sure to keep a lookout on the horizon. Bump, set, spike! Stay cool & connected, Kaya
Remember that when you are calling an external URL to fetch data via an API call to another system, you must encode special characters appropriately. For example, the URL that you may type into a browser to test may look like this:

https://someserver.somwhere.com:443/apicall?parameter1=test string&parameter2=test^number

But when scripting that into a string variable, you'll need to replace the space and the caret with the proper encoded values (%20 and %5E):

var params = {
    username : "me",
    password : "password",
    url : "https://someserver.somwhere.com:443/apicall?parameter1=test%20string&parameter2=test%5Enumber",
    ignoreSSLErrors : false,
    timeout : 60,
    headers : headers
};
var result = Resources['ContentLoaderFunctions'].LoadXML(params);

Also note that in this instance we're making a secure connection; therefore port 443 (typically the default) was explicitly specified.
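Rather than hand-encoding each character, JavaScript's built-in encodeURIComponent can produce the same result (same hypothetical endpoint as above):

// encodeURIComponent escapes spaces as %20 and carets as %5E, among others
var url = "https://someserver.somwhere.com:443/apicall" +
    "?parameter1=" + encodeURIComponent("test string") +  // test%20string
    "&parameter2=" + encodeURIComponent("test^number");   // test%5Enumber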
There are many choices in life, and ThingWorx offers some persistence provider options as well. As of ThingWorx release 8.2, five database options are provided:
1. PostgreSQL 9.4.5 minimum
2. DataStax Enterprise Edition 4.6.3, 5
3. SAP HANA SPS 11, 12
4. Microsoft SQL Server 2014 and later
5. H2 (version info is not available, perhaps because it is embedded)

H2 is for small scale, mainly for testing purposes; PostgreSQL and Microsoft SQL Server are for middle scale; and finally DataStax Enterprise Edition is for big scale. I don't have enough information about SAP HANA, so I would like to leave it untouched in my comment... I don't have numbers as to how many customers are using which database, but my gut feeling tells me that PostgreSQL is a popular option, especially cost-wise. PostgreSQL offers powerful tools, such as logging and utilities, to troubleshoot issues.

In this post I would like to cover some useful information you can retrieve by using pgstattuple and pgstatindex from the contrib module. By default, PostgreSQL takes good care of fragmentation and reindexing by itself. But in some cases, you may want to review the status of the database to narrow down the cause of an issue you are troubleshooting. There are many ways to achieve that, and the contrib module is provided to review stats of tables and indexes. As explained in this article, it is recommended to keep the number of records in value_stream and stream less than 100,000. That means you'll insert and delete many records when running ThingWorx. What happens then?
• If you delete (or update) a record in a table, PostgreSQL keeps the previous record in a page but marks it as deleted (and, for an update, inserts a new record).
• If the number of those logically deleted records increases, PostgreSQL needs to access many pages of the table to obtain records which meet the criteria, and users might experience slow performance because of this.
• Those logically deleted records are ultimately removed from pages when vacuum is run.

If you have installed and enabled the contrib module (CREATE EXTENSION pgstattuple;), you can review the stats with the commands below:

select * from pgstattuple('stream');               -- returns the stats of the stream table
select * from pgstatindex('stream_id_time_index'); -- returns the stats of an index on the stream table

pgstattuple returns the information below (reformatted for readability in this post); the meaning of each item is explained in the documentation:

table_len: 8192
tuple_count: 1
tuple_len: 33
tuple_percent: 0.4
dead_tuple_count: 3
dead_tuple_len: 97
dead_tuple_percent: 1.18
free_space: 8004
free_percent: 97.71

Before obtaining the stats, I inserted 4 records and deleted 3, and therefore it shows tuple_count (the active records) as 1, dead_tuple_count (the logically deleted records) as 3, and dead_tuple_percent as 1.18. If dead_tuple_percent is high, that means the table has not been vacuumed, or many DML statements were executed after the last vacuum operation, and this could be the cause of slow system performance.

* IMPORTANT: pgstattuple and pgstatindex consume resources, so it's recommended to run them during a maintenance window.

Takaaki
Original Post Date: September 30, 2016. Description: This tutorial video will walk you through the installation process for the PostgreSQL-based version of the ThingWorx Platform (7.2) in a RHEL environment. All required software components will be covered in this video.
The video below walks through how to publish a model from Analytics Builder into Analytics Manager using the connector named TW.AnalysisServices.AnalyticsServer.AnalyticsServerConnector.
This zip file contains my slides and buildable project from my LiveWorx 2017 developer chat presentation featuring the Thing-e Robot and Raspberry Pi Zero integration with ThingWorx using the C SDK.