
IoT & Connectivity Tips

Connection Server, InfluxDB, and Grafana can work together to display Connection Server metrics in graphic charts:

1. Download and set up the Connection Server.
2. Download InfluxDB and configure it by editing the <influxdb home>\influxdb.conf file. Note: the [admin] section configures the InfluxDB admin login page, [http] holds the settings Grafana connects to, and [graphite] holds the settings the Connection Server connects to (see the sample configuration below).
3. Run influxd.exe from a Windows command prompt: <path>/influxd -config <path>/influxdb.conf
4. Run influx.exe (in my case the port is 8008): <path>/influx -port 8008
5. Start the Connection Server.
6. Log in to InfluxDB using the URL http://localhost:8018. Choose the database graphite and run the query SHOW MEASUREMENTS. Note: many Connection Server metrics should be listed.
7. Double-click the grafana-server.exe file (<path>/grafana-4.2.0.windows-x64/grafana-4.2.0/bin).
8. Log in to Grafana with the URL http://localhost:3000. The default username and password are admin/admin.
9. Create a new Data Source. Note: the Type, Url, and Database properties are required. Click Save & Test; if the connection is set up successfully, a green message pops up.
10. Create a new dashboard, add a new Row, and set up its metrics by creating new queries.

Sample result:
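For reference, here is a minimal influxdb.conf sketch covering the three sections mentioned in step 2. The port values are assumptions taken from this walkthrough (8018 for the admin page, 8008 for the HTTP API that Grafana and the influx CLI use), not InfluxDB defaults, so adjust them to your installation:

[admin]
  # InfluxDB admin login page used in step 6
  enabled = true
  bind-address = ":8018"

[http]
  # HTTP API that Grafana (and the influx CLI) connect to
  enabled = true
  bind-address = ":8008"

[graphite]
  # Listener the Connection Server reports metrics to
  enabled = true
  bind-address = ":2003"
  database = "graphite"
  protocol = "tcp"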
Axeda Machine Streams enables external Platform integrators to access current, raw data from connected assets. The Platform can stream the data item, alarm, mobile location, and registration messages from connected assets to an ActiveMQ server or Azure Service Bus endpoint. Streamed data can be used for data analytics or reporting, or simply for storage. This article explains the Machine Streams Data Relay project that Axeda provides. This sample project illustrates how stream consumers can create their own projects to relay Machine Stream messages from ActiveMQ or Azure Service Bus into their environments. The Machine Streams Data Relay project was created using Apache Maven. The project operates by dispatching messages to a log message processor; each Machine Streams message is logged to stdout.

Note: The "Axeda Features Guide" provides a high-level introduction to the Axeda Machine Streams feature. That PDF is available from PTC Support (http://support.ptc.com/).

Downloading and Installing the Project

The machine-streams-data-relay project is provided as a tar.gz archive for Linux users and a .zip archive for Windows users. Each archive includes a Maven project with all source code. This page provides downloads and full source for the machine-streams-data-relay Maven project. The Data Relay project files are available from here.

Prerequisites

To download, build, and compile the machine-streams-data-relay project, you will need the following:

- Access to an Axeda Platform instance configured to stream asset data (for an ActiveMQ endpoint this includes the Axeda-provided ActiveMQ machine-streams plugin/overlay, axeda-jms-plugin-r<SVN_REVISION>-machine-streams.zip, which is provided here).
- An ActiveMQ or Azure Service Bus server configured for Machine Streams. Instructions for configuring an ActiveMQ or Azure Service Bus server for Machine Streams are provided in the "Axeda® Machine Streams: A Guide to Setting Up Broker Endpoints", available with all documentation from PTC Support (http://support.ptc.com/).
- At least one machine stream (Axeda Artisan Machine Streams Archetype) configured to stream data to the ActiveMQ or Azure Service Bus server for your assets. (Complete information about creating machine streams and adding machine stream support to the Axeda Platform is provided in the "Axeda v2 API/Services Developers Reference Guide", available from PTC Support (http://support.ptc.com/).)
- Access to the ActiveMQ or Azure Service Bus server configured as the endpoint for streamed Machine Streams content.
- Oracle Java JDK 1.7 or greater, with java and javac installed and available in your PATH (if you need instructions for this, see http://www.oracle.com/technetwork/java/javase/downloads/index.html).
- Maven 3.0.4 or greater, with mvn installed and available in your PATH (if you need instructions for this, see http://maven.apache.org/download.cgi).

Note: For the Machine Streams Data Relay project to work successfully, the Axeda Platform instance and the ActiveMQ or Azure Service Bus server instance must be configured with support for Axeda Machine Streams, and at least one machine stream must be configured to stream data. A quick way to confirm the Java and Maven prerequisites is shown below.
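A minimal shell check of the Java and Maven prerequisites listed above (the version numbers in the comments are just the minimums required):

java -version    # should report 1.7.0 or greater
javac -version   # confirms a JDK (not just a JRE) is on the PATH
mvn -version     # should report Maven 3.0.4 or greater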
Complete information about configuring Axeda Platform for Axeda Machine Streams, including the data format for the resulting streams (XML or JSON), is available in the "Axeda v2 API/Services Developers Reference Guide." Instructions for configuring an ActiveMQ or Azure Service Bus server for Machine Streams are provided in the "Axeda® Machine Streams: A Guide to Setting Up Broker Endpoints" Reference Guide (available from PTC Support (http://support.ptc.com/)).

Building the Project

This page provides instructions for building the Data Relay project for Linux and for Windows environments.

1. Download and uncompress the project for your environment.

Linux: Click here for the machine-streams-data-relay-1.0.3-project.tar.gz
# tar -zxvf machine-streams-data-relay-1.0.3-project.tar.gz
# cd machine-streams-data-relay-1.0.3

Windows: Click here for the machine-streams-data-relay-1.0.3-project.zip
Unzip the project to the following directory: C:\machine-streams-data-relay-1.0.3

2. Edit the ActiveMQ or Azure Service Bus configuration file (configAMQ.properties or configASB.properties) in src\main\scripts\ as needed. Sample config.properties files for the MachineStreamsDataRelay component are shown below.

For ActiveMQ broker endpoints - configAMQ.properties:

# The ActiveMQ broker URL.
brokerURL=tcp://localhost:62000
# The ActiveMQ queue name to process messages from.
# It can be a single queue: MachineStream.stream01
# Or a wildcard queue: MachineStream.>
queueName=MachineStream.>
# The username used to connect to the ActiveMQ queue
username=axedaadmin
# The password used to connect to the ActiveMQ queue
password=zQXuLzhQgcyRZ25JCDXYEPBCT2kx48
# The number of ActiveMQ broker connections.
numConnections=10
# The number of sessions per connection. Note that each session will create a separate thread.
numSessionsPerConnection=5
# The number of concurrent threads used for processing machine streams messages.
numProcessingThreads=100
# The type of message listener container.
# default = single queue name per connection.
# multiDestination = supports multiple queue names per connection
messageListenerContainerType=default

For Azure Service Bus broker endpoints - configASB.properties:

# The ASB broker URL.
brokerURL=amqps://your-azure-service-bus-namespace.servicebus.windows.net
# The ASB queues to process messages from.
# It can be a single queue: MachineStream.stream01
# Or multiple queues separated by a comma: MachineStream.stream01,MachineStream.stream02
# Or a queue range defined by the following syntax: MachineStream.stream[01-20]
queueName=MachineStream.stream[01-50]
# The username used to connect to the ASB queue(s)
username=your-azure-service-bus-username
# The password used to connect to the ASB queue(s)
password=the-password-for-your-azure-service-bus-username
# The max number of ASB broker connections.
numConnections=10
# The number of concurrent threads used for processing machine streams messages.
numProcessingThreads=100
# The type of message listener container.
# default = single queue name per connection.
# multiDestination = supports multiple queue names per connection
messageListenerContainerType=multiDestination

Note: messageListenerContainerType is provided because Azure Service Bus does not support wildcard queue names.
The configuration details are as follows:

brokerURL
Description: Location of the ActiveMQ or Azure Service Bus server (broker).

queueName
Description: Name of the ActiveMQ or Azure Service Bus queue(s) from which you want to process messages.
Value: A single queue (queueName=MachineStream.stream01), a wildcard queue name covering multiple queues (queueName=MachineStream.>), multiple queues separated by commas (queueName=MachineStream.stream01,MachineStream.stream02), or a queue range defined by the syntax MachineStream.stream[01-20] (for example, queueName=MachineStream.stream[01-50]). If you have multiple queues and you want to use ASB, you must set messageListenerContainerType to multiDestination and use the range syntax.

username
Description: Username used to connect to the ActiveMQ or Azure Service Bus queue(s).
Value: For ActiveMQ: username=axedaadmin. For ASB: username=your-azure-service-bus-username.

password
Description: Password used to connect to the ActiveMQ or ASB queue(s).

numConnections
Description: Number of ActiveMQ or Azure Service Bus broker connections.
Value: Default is 10 broker connections.

numSessionsPerConnection
Description: The number of sessions per connection; each session creates a separate thread. (This key is used infrequently.) APPLICABLE TO ACTIVEMQ ONLY.
Value: Default is 5 sessions per connection.

numProcessingThreads
Description: The number of concurrent threads used for processing machine streams messages.
Value: Default is 100 concurrent threads.

messageListenerContainerType
Description: The type of message listener container. default = single queue name per connection; multiDestination = supports multiple queue names per connection.
Value: Default is default (single queue name per connection).

3. Build the code using Maven. Use the -DskipTests option if you want to skip tests. This builds all source code and produces a bin archive in the target directory.

For Linux:
# mvn package -DskipTests

For Windows:
c:\> mvn package -DskipTests

4. Enter the target directory, uncompress the *bin archive, and enter the resulting directory.

For Linux:
# cd target
# tar -zxvf machine-streams-data-relay-1.0.3-bin.tar.gz
# cd machine-streams-data-relay-1.0.3

For Windows:
c:\> cd target
c:\> unzip machine-streams-data-relay-1.0.3.bin.zip
c:\> cd machine-streams-data-relay-1.0.3

5. Start the application, passing a config properties file.

For Linux:
# ./machineStreamsDataRelay.sh <config properties file>
e.g. ./machineStreamsDataRelay.sh configOfYourChoice.properties

For Windows:
c:\> machineStreamDataRelay.bat <config properties file>
e.g. machineStreamDataRelay.bat configOfYourChoice.properties

See the two example config files included within the project: configASB.properties (for Azure Service Bus) and configAMQ.properties (for ActiveMQ).

6. Scan the output.
If your ActiveMQ configuration is correct, output similar to the following should appear, and no ERRORS should be shown:

2014-03-26 10:27:06.179 [main] INFO  [MessageListenerServiceImpl]: Initializing connections to tcp://localhost:62000 username=axedaadmin
2014-03-26 10:27:06.346 [main] INFO  [MessageListenerServiceImpl]: Initialized connection 1: queue=MachineStream.> numSessions=5
2014-03-26 10:27:06.351 [main] INFO  [MessageListenerServiceImpl]: Initialized connection 2: queue=MachineStream.> numSessions=5
2014-03-26 10:27:06.356 [main] INFO  [MessageListenerServiceImpl]: Initialized connection 3: queue=MachineStream.> numSessions=5
2014-03-26 10:27:06.365 [main] INFO  [MessageListenerServiceImpl]: Initialized connection 4: queue=MachineStream.> numSessions=5
2014-03-26 10:27:06.369 [main] INFO  [MessageListenerServiceImpl]: Initialized connection 5: queue=MachineStream.> numSessions=5
2014-03-26 10:27:06.381 [main] INFO  [MessageListenerServiceImpl]: Initialized connection 6: queue=MachineStream.> numSessions=5
2014-03-26 10:27:06.388 [main] INFO  [MessageListenerServiceImpl]: Initialized connection 7: queue=MachineStream.> numSessions=5
2014-03-26 10:27:06.402 [main] INFO  [MessageListenerServiceImpl]: Initialized connection 8: queue=MachineStream.> numSessions=5
2014-03-26 10:27:06.411 [main] INFO  [MessageListenerServiceImpl]: Initialized connection 9: queue=MachineStream.> numSessions=5
2014-03-26 10:27:06.416 [main] INFO  [MessageListenerServiceImpl]: Initialized connection 10: queue=MachineStream.> numSessions=5

If your Azure Service Bus configuration is correct, output similar to the following should appear, and no ERRORS should be shown:

2014-10-01 16:51:30.114 [main] INFO [MessageListenerServiceImpl]: Initializing Connections to amqps://acme.servicebus.windows.net username=owner
2014-10-01 16:51:31.613 [ConnectionRecovery-thread-6] INFO [MultiDestinationMessageListenerContainer]: Connection 6 created 0/10 queue consumers
2014-10-01 16:51:31.614 [ConnectionRecovery-thread-8] INFO [MultiDestinationMessageListenerContainer]: Connection 8 created 0/10 queue consumers
2014-10-01 16:51:31.614 [ConnectionRecovery-thread-10] INFO [MultiDestinationMessageListenerContainer]: Connection 10 created 0/9 queue consumers
2014-10-01 16:51:31.614 [ConnectionRecovery-thread-2] INFO [MultiDestinationMessageListenerContainer]: Connection 2 created 0/10 queue consumers
2014-10-01 16:51:31.614 [ConnectionRecovery-thread-3] INFO [MultiDestinationMessageListenerContainer]: Connection 3 created 0/10 queue consumers
2014-10-01 16:51:31.614 [ConnectionRecovery-thread-5] INFO [MultiDestinationMessageListenerContainer]: Connection 5 created 0/10 queue consumers
2014-10-01 16:51:31.615 [ConnectionRecovery-thread-9] INFO [MultiDestinationMessageListenerContainer]: Connection 9 created 0/10 queue consumers
2014-10-01 16:51:31.615 [ConnectionRecovery-thread-4] INFO [MultiDestinationMessageListenerContainer]: Connection 4 created 0/10 queue consumers
2014-10-01 16:51:31.621 [ConnectionRecovery-thread-7] INFO [MultiDestinationMessageListenerContainer]: Connection 7 created 0/10 queue consumers
2014-10-01 16:51:31.756 [ConnectionRecovery-thread-1] INFO [MultiDestinationMessageListenerContainer]: Connection 1 created 0/10 queue consumers
2014-10-01 16:51:32.613 [ConnectionRecovery-thread-6] INFO [MultiDestinationMessageListenerContainer]: Connection 6 created 9/10 queue consumers
2014-10-01 16:51:32.614 [ConnectionRecovery-thread-8] INFO [MultiDestinationMessageListenerContainer]: Connection 8 created 9/10 queue consumers
2014-10-01 16:51:32.614 [ConnectionRecovery-thread-10] INFO [MultiDestinationMessageListenerContainer]: Connection 10 created 7/9 queue consumers
2014-10-01 16:51:32.614 [ConnectionRecovery-thread-2] INFO [MultiDestinationMessageListenerContainer]: Connection 2 created 10/10 queue consumers
2014-10-01 16:51:32.615 [ConnectionRecovery-thread-3] INFO [MultiDestinationMessageListenerContainer]: Connection 3 created 9/10 queue consumers
2014-10-01 16:51:32.615 [ConnectionRecovery-thread-5] INFO [MultiDestinationMessageListenerContainer]: Connection 5 created 0/10 queue consumers
2014-10-01 16:51:32.615 [ConnectionRecovery-thread-9] INFO [MultiDestinationMessageListenerContainer]: Connection 9 created 7/10 queue consumers
2014-10-01 16:51:32.615 [ConnectionRecovery-thread-4] INFO [MultiDestinationMessageListenerContainer]: Connection 4 created 9/10 queue consumers
2014-10-01 16:51:32.623 [ConnectionRecovery-thread-7] INFO [MultiDestinationMessageListenerContainer]: Connection 7 created 9/10 queue consumers
2014-10-01 16:51:32.756 [ConnectionRecovery-thread-1] INFO [MultiDestinationMessageListenerContainer]: Connection 1 created 10/10 queue consumers
2014-10-01 16:51:32.833 [main] INFO [MessageListenerServiceImpl]: Initialized Connection 1: numQueues=10 initTimeMillis=2631 millis
2014-10-01 16:51:32.833 [main] INFO [MessageListenerServiceImpl]: Initialized Connection 2: numQueues=10 initTimeMillis=2488 millis
2014-10-01 16:51:33.613 [ConnectionRecovery-thread-6] INFO [MultiDestinationMessageListenerContainer]: Connection 6 created 10/10 queue consumers
2014-10-01 16:51:33.614 [ConnectionRecovery-thread-8] INFO [MultiDestinationMessageListenerContainer]: Connection 8 created 10/10 queue consumers
2014-10-01 16:51:33.614 [ConnectionRecovery-thread-10] INFO [MultiDestinationMessageListenerContainer]: Connection 10 created 9/9 queue consumers
2014-10-01 16:51:33.615 [ConnectionRecovery-thread-3] INFO [MultiDestinationMessageListenerContainer]: Connection 3 created 9/10 queue consumers
2014-10-01 16:51:33.615 [ConnectionRecovery-thread-5] INFO [MultiDestinationMessageListenerContainer]: Connection 5 created 0/10 queue consumers
2014-10-01 16:51:33.615 [ConnectionRecovery-thread-9] INFO [MultiDestinationMessageListenerContainer]: Connection 9 created 8/10 queue consumers
2014-10-01 16:51:33.615 [ConnectionRecovery-thread-4] INFO [MultiDestinationMessageListenerContainer]: Connection 4 created 9/10 queue consumers
2014-10-01 16:51:33.623 [ConnectionRecovery-thread-7] INFO [MultiDestinationMessageListenerContainer]: Connection 7 created 10/10 queue consumers
2014-10-01 16:51:34.615 [ConnectionRecovery-thread-5] INFO [MultiDestinationMessageListenerContainer]: Connection 5 created 0/10 queue consumers
2014-10-01 16:51:34.615 [ConnectionRecovery-thread-3] INFO [MultiDestinationMessageListenerContainer]: Connection 3 created 9/10 queue consumers
2014-10-01 16:51:34.615 [ConnectionRecovery-thread-9] INFO [MultiDestinationMessageListenerContainer]: Connection 9 created 8/10 queue consumers
2014-10-01 16:51:34.615 [ConnectionRecovery-thread-4] INFO [MultiDestinationMessageListenerContainer]: Connection 4 created 9/10 queue consumers
2014-10-01 16:51:35.615 [ConnectionRecovery-thread-5] INFO [MultiDestinationMessageListenerContainer]: Connection 5 created 9/10 queue consumers
2014-10-01 16:51:35.615 [ConnectionRecovery-thread-3] INFO [MultiDestinationMessageListenerContainer]: Connection 3 created 9/10 queue consumers
2014-10-01 16:51:35.615 [ConnectionRecovery-thread-9] INFO [MultiDestinationMessageListenerContainer]: Connection 9 created 8/10 queue consumers
2014-10-01 16:51:35.616 [ConnectionRecovery-thread-4] INFO [MultiDestinationMessageListenerContainer]: Connection 4 created 9/10 queue consumers
2014-10-01 16:51:36.616 [ConnectionRecovery-thread-5] INFO [MultiDestinationMessageListenerContainer]: Connection 5 created 9/10 queue consumers
2014-10-01 16:51:36.616 [ConnectionRecovery-thread-3] INFO [MultiDestinationMessageListenerContainer]: Connection 3 created 9/10 queue consumers
2014-10-01 16:51:36.616 [ConnectionRecovery-thread-4] INFO [MultiDestinationMessageListenerContainer]: Connection 4 created 9/10 queue consumers
2014-10-01 16:51:36.616 [ConnectionRecovery-thread-9] INFO [MultiDestinationMessageListenerContainer]: Connection 9 created 8/10 queue consumers
2014-10-01 16:51:37.616 [ConnectionRecovery-thread-5] INFO [MultiDestinationMessageListenerContainer]: Connection 5 created 9/10 queue consumers
2014-10-01 16:51:37.616 [ConnectionRecovery-thread-3] INFO [MultiDestinationMessageListenerContainer]: Connection 3 created 9/10 queue consumers
2014-10-01 16:51:37.616 [ConnectionRecovery-thread-4] INFO [MultiDestinationMessageListenerContainer]: Connection 4 created 9/10 queue consumers
2014-10-01 16:51:37.616 [ConnectionRecovery-thread-9] INFO [MultiDestinationMessageListenerContainer]: Connection 9 created 8/10 queue consumers
2014-10-01 16:51:38.616 [ConnectionRecovery-thread-3] INFO [MultiDestinationMessageListenerContainer]: Connection 3 created 10/10 queue consumers
2014-10-01 16:51:38.617 [ConnectionRecovery-thread-9] INFO [MultiDestinationMessageListenerContainer]: Connection 9 created 10/10 queue consumers
2014-10-01 16:51:38.616 [ConnectionRecovery-thread-4] INFO [MultiDestinationMessageListenerContainer]: Connection 4 created 10/10 queue consumers
2014-10-01 16:51:38.616 [ConnectionRecovery-thread-5] INFO [MultiDestinationMessageListenerContainer]: Connection 5 created 10/10 queue consumers
2014-10-01 16:51:38.643 [main] INFO [MessageListenerServiceImpl]: Initialized Connection 3: numQueues=10 initTimeMillis=8491 millis
2014-10-01 16:51:38.643 [main] INFO [MessageListenerServiceImpl]: Initialized Connection 4: numQueues=10 initTimeMillis=8490 millis
2014-10-01 16:51:38.643 [main] INFO [MessageListenerServiceImpl]: Initialized Connection 5: numQueues=10 initTimeMillis=8490 millis
2014-10-01 16:51:38.643 [main] INFO [MessageListenerServiceImpl]: Initialized Connection 6: numQueues=10 initTimeMillis=3485 millis
2014-10-01 16:51:38.643 [main] INFO [MessageListenerServiceImpl]: Initialized Connection 7: numQueues=10 initTimeMillis=3495 millis
2014-10-01 16:51:38.643 [main] INFO [MessageListenerServiceImpl]: Initialized Connection 8: numQueues=10 initTimeMillis=3485 millis
2014-10-01 16:51:38.643 [main] INFO [MessageListenerServiceImpl]: Initialized Connection 9: numQueues=10 initTimeMillis=8488 millis
2014-10-01 16:51:38.643 [main] INFO [MessageListenerServiceImpl]: Initialized Connection 10: numQueues=9 initTimeMillis=3485 millis

7. To verify that messages are being streamed properly from the Axeda Platform, send DataItems from your connected Assets. You should see messages similar to the following. (Remember that each Asset you are testing must have an associated Machine Stream.)
2014-03-26 10:45:16.309 [pool-1-thread-1] INFO  [LogMessageProcessor]: StreamedDataItem: Model,Asset1,799021d6-70a3-7c32-0000-00000000021d,false,Wed Mar 26 14:45:16 EDT 2014,temp,43,analog
2014-03-26 10:45:21.137 [pool-1-thread-2] INFO  [LogMessageProcessor]: StreamedDataItem: Model,Asset2,799021d6-70a3-7c32-0000-000000000225,false,Wed Mar 26 14:45:21 EDT 2014,temp,43,analog
2014-03-26 10:45:26.134 [pool-1-thread-3] INFO  [LogMessageProcessor]: StreamedDataItem: Model,Asset1,799021d6-70a3-7c32-0000-00000000022b,false,Wed Mar 26 14:45:26 EDT 2014,temp,44,analog
2014-03-26 10:45:31.135 [pool-1-thread-4] INFO  [LogMessageProcessor]: StreamedDataItem: Model,Asset2,799021d6-70a3-7c32-0000-000000000231,false,Wed Mar 26 14:45:31 EDT 2014,temp,44,analog
2014-03-26 10:45:36.142 [pool-1-thread-5] INFO  [LogMessageProcessor]: StreamedDataItem: Model,Asset1,799021d6-70a3-7c32-0000-000000000237,false,Wed Mar 26 14:45:36 EDT 2014,temp,45,analog
2014-03-26 10:45:41.146 [pool-1-thread-6] INFO  [LogMessageProcessor]: StreamedDataItem: Model,Asset2,799021d6-70a3-7c32-0000-00000000023d,false,Wed Mar 26 14:45:41 EDT 2014,temp,45,analog

Configuring a CustomMessageProcessor

By default, the project is configured to use a LogMessageProcessor that logs each streamed message it receives to standard out. The project takes a StreamedMessage in either an XML or JSON format (as configured in the MachineStream SDKv2 object) and decodes the message into a StreamedMessage Java object. LogMessageProcessor.java implements the MessageProcessor interface. Here is the MessageProcessor.java interface:

MessageProcessor.java

package com.axeda.tools.streams.processor;

import com.axeda.tools.streams.model.StreamedMessage;

/**
 * This class defines the message processor method callback that will be called for message processing.
 * Note that these methods will be called by multiple threads concurrently.
 */
public interface MessageProcessor {
    /**
     * Process a machine stream message. Note that this method will be called by multiple threads concurrently.
     * The number of concurrent processing threads is defined in MachineStreamsConfig.getNumProcessingThreads().
     * If you add code here that significantly slows down message processing, then there is the potential that
     * MessageListenerService threads will also block. When the MessageListenerService threads block, this means that
     * messages will start to backup in the ActiveMQ or ASB message queues. If you are processing a large number of messages,
     * then you may need to adjust your configuration parameters or optimize your processMessage() code.
     * @param message machine streams message to process
     */
    public void processMessage(StreamedMessage message);
}

An additional class named CustomMessageProcessor.java has been provided so that you can provide your own custom message processing logic:

CustomMessageProcessor.java

package com.axeda.tools.streams.processor;

import org.springframework.stereotype.Component;

import com.axeda.tools.streams.model.StreamedAlarm;
import com.axeda.tools.streams.model.StreamedDataItemMessage;
import com.axeda.tools.streams.model.StreamedMessage;
import com.axeda.tools.streams.model.StreamedMobileLocation;
import com.axeda.tools.streams.model.StreamedRegistrationMessage;

/**
 * This class was provided for customers to implement their own message processing business logic.
 * To use this class, change the @Autowired messageProcessor qualifier in
 * MessageProcessingServiceImpl.java to @Qualifier("customMessageProcessor")
 */
@Component("customMessageProcessor")
public class CustomMessageProcessor implements MessageProcessor {
    /**
     * (non-Javadoc)
     * @see com.axeda.tools.streams.processor.MessageProcessor#processMessage(com.axeda.tools.streams.model.StreamedMessage)
     *
     * Process a machine stream message. Note that this method will be called by multiple threads
     * concurrently. The number of concurrent processing threads is defined in
     * MachineStreamsConfig.getNumProcessingThreads().
     * If you add code here that significantly slows down message processing, then there is the
     * potential that MessageListenerService threads will also block. When the MessageListenerService
     * threads block, this means that messages will start to backup in the ActiveMQ or Azure Service Bus
     * message queues. If you are processing a large number of messages, then you may need to adjust
     * your configuration parameters or optimize your processMessage() code.
     */
    @SuppressWarnings("unused")
    @Override
    public void processMessage(StreamedMessage message) {
        if (message instanceof StreamedDataItemMessage) {
            StreamedDataItemMessage dataItem = (StreamedDataItemMessage) message;
            // add your business logic here
        } else if (message instanceof StreamedAlarm) {
            StreamedAlarm alarm = (StreamedAlarm) message;
            // add your business logic here
        } else if (message instanceof StreamedMobileLocation) {
            StreamedMobileLocation mobileLocation = (StreamedMobileLocation) message;
            // add your business logic here
        } else if (message instanceof StreamedRegistrationMessage) {
            StreamedRegistrationMessage registration = (StreamedRegistrationMessage) message;
            // add your business logic here
        }
    }
}

The Axeda Platform Machine Streams feature currently supports four message types: StreamedDataItemMessage, StreamedAlarm, StreamedMobileLocation, and StreamedRegistrationMessage. For each message type, add your message processing business logic. You may want to write each message to your favorite NoSQL database or to a flat file (a sketch of the flat-file approach appears at the end of this article). Once you have completed your changes to the CustomMessageProcessor, you must make one change in the MessageProcessingServiceImpl.java class to use this Spring bean:

Uncomment this line: // @Qualifier("customMessageProcessor")
Comment this line: @Qualifier("logMessageProcessor")

The following code snippet shows what your changes should look like when you are finished:

MessageProcessingServiceImpl.java

@Component("messageProcessingService")
public class MessageProcessingServiceImpl implements MessageProcessingService {

    private static final Logger LOGGER = LoggerFactory.getLogger(MessageProcessingServiceImpl.class);

    @Autowired
    private MessageDecoder messageDecoder;

    @Autowired
    // If you want to use the CustomMessageProcessor instead of the default LogMessageProcessor
    // then change this Qualifier to @Qualifier("customMessageProcessor")
    //@Qualifier("logMessageProcessor")
    private MessageProcessor messageProcessor;

    private ExecutorService executorService;
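As an illustration of the flat-file approach mentioned above, here is a minimal sketch of a custom processor. It is not part of the shipped project: the class name, bean name, target path, and use of toString() are assumptions for demonstration only, and it handles just one of the four message types.

package com.axeda.tools.streams.processor;

import java.io.BufferedWriter;
import java.io.IOException;
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Paths;
import java.nio.file.StandardOpenOption;

import org.springframework.stereotype.Component;

import com.axeda.tools.streams.model.StreamedDataItemMessage;
import com.axeda.tools.streams.model.StreamedMessage;

/**
 * Hypothetical example: appends each streamed data item to a flat file.
 */
@Component("fileMessageProcessor")
public class FileMessageProcessor implements MessageProcessor {

    @Override
    public void processMessage(StreamedMessage message) {
        if (!(message instanceof StreamedDataItemMessage)) {
            return; // extend with the other three message types as needed
        }
        try (BufferedWriter out = Files.newBufferedWriter(
                Paths.get("/tmp/machine-streams.log"), StandardCharsets.UTF_8,
                StandardOpenOption.CREATE, StandardOpenOption.APPEND)) {
            // toString() is used only to keep the sketch self-contained;
            // real code would use the message's accessor methods
            out.write(message.toString());
            out.newLine();
        } catch (IOException e) {
            // report and continue so the processing threads stay alive
            e.printStackTrace();
        }
    }
}

Opening the file per message keeps the example short; a production version would share one synchronized, buffered writer, because processMessage() runs on many threads concurrently and, as noted above, slow processing can block the MessageListenerService threads.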
Introduction

The In-Memory Column Store stores data in columnar format, contrary to row format, allowing users to run faster analytics; the idea behind this is to push the computation as close to the data store as possible. In this post I'll configure an Oracle database to enable this feature and then populate one or more tables into the In-Memory Column Store. This could be particularly helpful if you are using Oracle 12c as an external data store, storing data in database tables via a JDBC connection, or current/historic values from a DataTable, Stream, or ValueStream, for running analytics or DML with lots of joins requiring a lot of computation before the data is finally presented on the Mashup(s). For this post I used data generated by a temperature sensor stored in a ValueStream, exported it to CSV from the ValueStream, and imported it into an Oracle table.

In-Memory Column Store vs In-Memory Database Usage

As mentioned above, Oracle 12c version 12.1.2 ships with the built-in In-Memory Column Store feature. As the name suggests, it allows data to be populated in RAM, enabling high-speed transactions and analytics on data without the need to traverse the HDD; in some cases this is much faster than the buffer cache of the SGA. Without going into too much nitty-gritty, it's important to note that the In-Memory Column Store does not equate to an in-memory database. While it could be possible to move an entire schema, if it's small enough, to fit in the memory defined for the In-Memory Column Store, the idea is to speed up computation requiring analytics on one or more tables that are heavily queried by users. If you are interested in an in-memory database as a persistence provider for ThingWorx, please refer to the documentation Using SAP HANA as the Persistence Provider, which is one option among the other available Persistence Providers for ThingWorx.

What changes are required to the current Oracle 12c installation or JDBC connection to ThingWorx?

The In-Memory Column Store is an inbuilt feature of Oracle 12.1.2 and only needs to be enabled, as it is not by default. It can be enabled without changing any of the following:

1. The existing SQL services created within ThingWorx
2. The general application architecture accessing the tables in the Oracle database over JDBC
3. The existing Oracle 12c installation

Getting Started

What will it take to enable the In-Memory Column Store? The feature can be enabled in a few steps:

1. Enable the feature in the Oracle 12.1.2 installation by assigning some RAM to the In-Memory Column Store
2. Adjust the SGA size of the database to incorporate the memory assigned to the In-Memory Column Store
3. Bounce the database

Though this is an inbuilt feature of Oracle 12.1.2, it is not enabled by default; we can confirm this by executing the following SQL in SQL*Plus or Oracle SQL Developer while connected to the database for which we are enabling the feature:

SQL> show parameter INMEMORY;

Things to consider before enabling

1. Ensure that the hardware/VM hosting the Oracle installation has sufficient RAM.
2. Ensure that you bump up the SGA by the amount of memory assigned to the In-Memory Column Store; failing to do so may lead to the database failing to start, requiring recovery.

Note: The minimum memory that can be assigned to the In-Memory Column Store is 100M.

Setting it all up

For my test setup I will assign 5G to the In-Memory Column Store and add this amount to the current SGA. To do this, start SQL*Plus with rights that allow changes to the existing SGA; I'm using sys@orcl as sysdba (ORCL is the name of my test database).

Step 1: Start SQL*Plus, e.g. sqlplus sys@orcl as sysdba
Step 2: ALTER SYSTEM SET INMEMORY_SIZE = 5G SCOPE=SPFILE;
Step 3: ALTER SYSTEM SET SGA_TARGET = 20G SCOPE=SPFILE;

Once done, bounce the database. And that's it! We can now confirm via SQL*Plus that a certain amount of memory, 5G in my case, has been assigned to the In-Memory Column Store:

SQL> show parameter inmemory

Populating the In-Memory Column Store

The In-Memory Column Store populates data from a table only on first use, or as soon as the database comes online after a restart if the table is marked critical. For more detail on the commands concerning the In-Memory Column Store, refer to the OTN webpage. I'll now use the SensorHistory table, which holds the ValueStream's exported data in CSV format, currently ~32 million+ rows, and populate it into the columnar architecture of the In-Memory Column Store with the following command:

SQL> ALTER TABLE SENSORHISTORY INMEMORY; -- marks the table as eligible for the In-Memory Column Store with default parameters

Just to confirm that the data is still not populated, since we have only marked the table as eligible for the In-Memory Column Store, querying the dynamic view V$IM_SEGMENTS for current In-Memory usage will confirm this. So now let's populate the In-Memory Column Store with a query that requires a full table scan, e.g.

SQL> select property_name, count(*) from sensorhistory
     group by property_name;

Let's recheck the dynamic view V$IM_SEGMENTS (a short SQL sketch covering the "critical" priority and the population check appears below, after the JDBC section). As mentioned above, this is completely transparent to the application layer, so if you already have an existing JDBC connection from ThingWorx to Oracle, all the existing services created for that table will continue to work as expected. If you don't have an existing JDBC connection to Oracle, it can be created with the usual steps, with no special configuration for In-Memory.

Creating a JDBC connection

I'm including this section for the purpose of completeness; if you already have a working JDBC connection to Oracle 12.1.2, you can skip to the Conclusion below. To access the above database, along with the In-Memory Column Store table, we'll now set up the JDBC connection to Oracle. Download and import the TW_Download_Relational Databases Connectors.zip (ThingWorx Marketplace) and unzip it to access Oracle12Connector_Extension.zip.

Step 1: Import the extension into ThingWorx by navigating to Import/Export > Import > Extensions
Step 2: Create a Thing using the OracleDBServer12 Template, part of the extension we just imported
Step 3: Here's how a valid configuration would look to successfully connect to the database, ORCL in this case
Step 4: Navigate to the Properties in the Entity Information panel on the left and verify that the isConnected property value is True.
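Here is the minimal SQL sketch promised above, using the SENSORHISTORY table from this example. The PRIORITY CRITICAL clause asks Oracle to repopulate the table automatically at database startup instead of waiting for first use, and V$IM_SEGMENTS shows what has actually been populated:

-- Mark the table for In-Memory population at database startup
ALTER TABLE sensorhistory INMEMORY PRIORITY CRITICAL;

-- Check what has actually been populated into the column store
SELECT segment_name, populate_status, bytes_not_populated
  FROM v$im_segments;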
Conclusion

This is a very short introduction to a setup that can improve the performance of analytics run on stored data, manifold. The data in the In-Memory Column Store is not stored in the conventional row format, but in a large columnar format. If you only need simple SQL queries without many joins, the SGA cache may be sufficient, and probably faster, and you may not gain much by configuring the In-Memory Column Store. However, for queries requiring heavy computation on large data sets, having the In-Memory Column Store configured can bring a manifold increase in performance. If you need more guidelines on where to use the In-Memory Column Store, give the reads listed below a try, along with a real-world data use case for reference. I will try to find some time to run my own benchmark and put it out in a separate blog on performance gains.

1. Oracle Database In-Memory with Oracle Database 12c Release 2: Oracle white paper
2. When to Use Oracle Database In-Memory: Identifying Use Cases for Application Acceleration
3. Oracle Database 12c In-Memory Option
4. Testing Oracle In-Memory Column Store @ CERN
Key Functional Highlights

Production Advisor is now available in the Freemium and Developer Kit downloads. Plant managers are provided with real-time monitoring of production status and critical KPIs such as utilization, performance, quality, and OEE, by unifying data from disparate lines, assets, and sensors. With Production Advisor, plant managers can detect and react instantly to production issues, reaching lower downtime, higher production throughput, and better quality from factory resources.

Compatibility

- ThingWorx 8.0.1
- KEPServerEX 6.2
- KEPServerEX V6.1 and older, as well as different OPC servers (with the Kepware OPC aggregator)

Documentation

- ThingWorx Manufacturing Apps Setup and Configuration Guide: https://support.ptc.com/WCMS/files/173133/en/ThingWorxManufacturingAppsSetup_8-0-1.pdf
- ThingWorx Manufacturing Apps Customization Guide: https://support.ptc.com/WCMS/files/173135/en/ThingWorxManufacturingAppsCust_8-0-1.pdf
- Get Started documentation on the portal: https://www.ptc.com/en/thingworx/manufacturing-apps/Dashboard/Get-Started (PTC users should use their normal login credentials and do not need to register on the portal)

Download

- Freemium and Developer Kit (8.0.1) are available for download here: https://www.ptc.com/en/thingworx/manufacturing-apps/Dashboard (PTC users should use their normal login credentials and do not need to register on the portal)
- ThingWorx Platform Extensions (8.1.0, released 1 Nov 2017) are available for download here: https://support.ptc.com/appserver/auth/it/esd/product.jsp?prodFamily=TWA
Video Author: Stefan Taka
Original Post Date: June 6, 2016

Description: In part 1 of the ThingWorx REST API tutorial, we will introduce you to the ThingWorx API structure and demonstrate how the API can be invoked.

Blog post with text examples found here: REST API Overview and Examples
The following Expert Session videos are now available for viewing within the ThingWorx Community:

ThingWorx Analytics Installation - This Expert Session walks you through the complete installation of ThingWorx Analytics, from the prerequisites to confirming the installation is successful, and all steps in between. The first half of the video gives a breakdown of the components and the process of the installation, with the second half being an actual demo of the installation.

ThingWorx Analytics API Overview - This Expert Session is designed to help beginners get up and running with ThingWorx Analytics. It covers basic concepts like what APIs are and how to configure the metadata file, plus a live demo that shows you how to interact with and use ThingWorx Analytics in real time. This Expert Session is also useful for experienced users who need a refresher course.

Decision Tree, ThingWorx Analytics Builder - This Expert Session reviews the concept of decision trees and the functionality that is available in ThingWorx Analytics Builder. First, you will learn how to create and upload a dataset in ThingWorx Analytics Builder. After that, it shows you how to train a model and score against the model that was just generated. It then goes into detail on how the prediction learner "Decision Tree" operates and classifies inputs.

Use Case Identification - This Expert Session goes over ways to identify and develop a successful use case for ThingWorx Analytics. The example use case presented here is employee retention in a fictional company, with the goal of maximizing employee retention. This presentation provides you with all the fundamentals you need to develop your own ThingWorx Analytics use cases from the ground up.

ThingWorx Analytics Signals - This Expert Session provides an in-depth explanation of how Signals are calculated in ThingWorx Analytics, what purpose they serve, and why we use them. Some basic mathematical concepts are discussed so viewers will have a better idea of how ThingWorx Analytics operates behind the scenes.

Related Links

For more information, you can visit a new space dedicated to these helpful technical videos. Additional Expert Sessions will be highlighted here in the ThingWorx Community every few weeks. Visit the Online Success Guide to access additional information about ThingWorx training and services.
Mapping previous versions of the ThingWorx Analytics API to ThingWorx Analytics 8.1 Services

Since ThingWorx Analytics 8.1, the classic server monolith has been replaced by a series of independent microservices. This new structure groups services around specific elements of functionality (data, training, results). Thus the previous API commands for accessing ThingWorx Analytics functions have been replaced by ThingWorx Services. Those Services exist within specific Microservice Things accessible in the ThingWorx Platform 8.1. The mapping below covers the most common previous API commands, from version 8.0 and earlier, to the related version 8.1 Services. It is not an exhaustive listing of either API commands or Services, and the API commands shown are samples that might require further information, such as headers and body, once used; they appear here for reference purposes.

1. Version Info
Previous API: GET: http://<IP Address>:8080/1.0/about/versioninfo
8.1 Service: VersionInfo - available in each Microservice Thing inheriting from Analytics Server
Description: Returns the internal version number for a specific microservice. The first two digits = ThingWorx Core version. The next three digits = version of the microservice.

2. Registering new Dataset
Previous API: POST: http://<IP Address>:8080/1.0/datasets/
8.1 Service: CreateDataset - Data Microservice
Description: Creates the dataset, uploads the data along with its metadata, and optimizes it automatically.

3. Checking Dataset Status
Previous API: GET: http://<IP Address>:8080/1.0/datasets/<DataSet Name>
8.1 Service: ListCreatedDatasets - Data Microservice
Description: This old functionality is replaced by a Service that lists all the created datasets.

4. Creating Metadata
Previous API: POST: http://<IP Address>:8080/1.0/datasets/<DataSet Name>/configuration
8.1 Service: CreateDataset - Data Microservice
Description: See item 2 for further information.

5. Checking Dataset Configuration
Previous API: GET: http://<IP Address>:8080/1.0/datasets/<DataSet Name>/configuration
8.1 Service: GetDatasetSchema - Data Microservice
Description: Retrieves the metadata from a dataset.

6. Loading Dataset CSV
Previous API: POST: http://<IP Address>:8080/1.0/datasets/<DataSet Name>/data
8.1 Service: CreateDataset - Data Microservice
Description: See item 2 for further information.

7. Checking Job Status
Previous API: GET: http://<IP Address>:8080/1.0/status/<Job ID>
8.1 Service: GetJobStatus - available in all created Microservices inheriting from AnalyticsJob Server
Description: Retrieves the status of a specific job.

8. Signals Job
Previous API: POST: http://<IP Address>:8080/1.0/datasets/<DataSet Name>/signals
8.1 Service: CreateJob - Signals Microservice
Description: Creates a job to identify signals.

9. Signal Results Job
Previous API: GET: http://<IP Address>:8080/1.0/datasets/<DataSet Name>/signals/<Job ID>/results
8.1 Service: RetrieveResult - Signals Microservice
Description: Retrieves the result of a Signals job.

10. Profile Job
Previous API: POST: http://<IP Address>:8080/1.0/datasets/<DataSet Name>/profiles
8.1 Service: CreateJob - Profiling Microservice
Description: Creates a job to generate profiles.

11. Profile Result Job
Previous API: GET: http://<IP Address>:8080/1.0/datasets/<DataSet Name>/profiles/<Job ID>/results
8.1 Service: RetrieveResult - Profiling Microservice
Description: Retrieves the results of a profiles job.

12. Train Model Job
Previous API: POST: http://<IP Address>:8080/1.0/datasets/<DataSet Name>/prediction
8.1 Service: CreateJob - Training Microservice
Description: Creates a prediction model job.

13. Train Model Result Job
Previous API: GET: http://<IP Address>:8080/1.0/datasets/<DataSet Name>/prediction/<Job ID>/results
8.1 Service: RetrieveModel - Training Microservice
Description: Only retrieves the PMML model. But if a holdout for validation was specified in the CreateJob, a validation job is auto-created and runs.

14. Scoring Job
Previous API: POST: http://<IP Address>:8080/1.0/datasets/<DataSet Name>/predictive_scores
8.1 Service: BatchScore - Prediction Microservice
Description: Submits a predictive scoring job.

15. Scoring Job Result
Previous API: GET: http://<IP Address>:8080/1.0/datasets/<DataSet Name>/predictive_scores/<Job ID>/results
8.1 Service: RetrieveResult - Prediction Microservice
Description: Retrieves results from prediction scoring jobs.
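Since the 8.1 functionality is exposed as ordinary ThingWorx Services, the old direct HTTP calls become standard ThingWorx REST service invocations (POST to /Thingworx/Things/<ThingName>/Services/<ServiceName> with an appKey header). As a rough sketch, checking a job status might look like the following; the Thing name, appKey, and the jobId input parameter name are placeholders/assumptions, since the exact Thing names depend on your installation:

curl -X POST \
  -H "appKey: <your application key>" \
  -H "Content-Type: application/json" \
  -H "Accept: application/json" \
  -d '{"jobId": "<Job ID>"}' \
  http://<ThingWorx host>/Thingworx/Things/<YourAnalyticsJobThing>/Services/GetJobStatus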
Use the ThingWorx Azure IoT Hub Connector with simulated appliances.

Guide Concept

This project will introduce how to integrate ThingWorx with Azure IoT Hub. The combination of these two platforms extends the ThingWorx utilities to Azure IoT Hub edge devices and allows integration with Azure Blob Storage accounts. Following the steps in this guide, you will install the ThingWorx Azure IoT Hub Connector and run a simulated Azure device. We will teach you how to build powerful and scalable IoT applications by integrating ThingWorx and Azure IoT Hub.

You'll learn how to

- Install, configure, and run the ThingWorx Azure IoT Hub Connector
- Import devices that exist in Azure into ThingWorx
- Connect a simulated Azure device to a ThingWorx Foundation server

NOTE: This guide's content aligns with ThingWorx 9.3. The estimated time to complete this guide is 60 minutes.

Step 1: Overview Diagram

The ThingWorx Azure IoT Hub Connector maintains a network connection to both an Azure IoT Hub and a ThingWorx Foundation instance. It enables remote devices that connect to the Azure IoT Hub to connect to the ThingWorx Foundation server. The Azure IoT Connector handles message routing for the devices that communicate through the Azure IoT system. It also handles message routing from the ThingWorx Foundation server to devices via the Azure IoT Hub. The ThingWorx Azure IoT Connector is a separate, stand-alone application that must run on a server that can connect to both the ThingWorx server and the Azure IoT service.

Step 2: Configure Azure IoT Hub

In order to use the ThingWorx Azure IoT Connector, you must first configure an IoT Hub and a storage account in Microsoft Azure. You can provision a free tier account for these resources. In this step, we will create the Azure resources and gather the configuration information that enables you to connect to ThingWorx with the required credentials.

Log into Azure Portal

If you do not already have an Azure account, you can create a free account that will work with this guide.

Create Azure IoT Hub

Follow the Microsoft documentation to create an Azure IoT Hub, accepting any defaults.

TIP: The name of your IoT Hub must be globally unique and include only lowercase letters and numbers.

1. Create message routes to direct DeviceLifecycleEvents and TwinChangeEvents events to the built-in events endpoint. For a tutorial, refer to Tutorial: Use the Azure CLI and Azure portal to configure IoT Hub message routing.
2. Register at least one Azure IoT Device or Azure IoT Edge Device to your Azure IoT Hub. For a tutorial, refer to Register an IoT Edge device in IoT Hub.

Create Storage Account

Follow the Microsoft documentation to create an Azure Storage account.

NOTE: Select Blob storage as the account type and the Hot access tier.

Step 3: Import Extensions

The ThingWorx Azure IoT Hub Connector distribution bundle comes packaged with all the software you will need to connect ThingWorx and Azure.

1. Download the Azure IoT Hub Connector from PTC Support.
2. Extract the application bundle to a directory on the system where it will run (where v.v.v represents the release number). On Linux, this guide uses the base directory /opt; the subdirectories and files should reside in the directory /opt/ThingWorx-Azure-IoTHub-Connector-4.2.0.
On Windows, extract the bundle so that the subdirectories and files reside in C:\ThingWorx-Azure-IoT-Hub-Connector-4.2.0.
3. In the lower-left side of Composer, click Import/Export, then Import.
4. In the Import From File pop-up, under Import Option, select Extension from the drop-down, then click Browse.
5. Navigate to the /extensions directory and click on the ConnectionServicesExtension-2.2.4.zip file.
6. Click Import in the Import From File pop-up, then click Close after the file is successfully imported.
7. Repeat the above steps for the azure-iot-hub-adapter-extension-4.2.0.4.zip file.
8. Follow the steps to Create an Application Key and note the value; it will be used in the next step.

Step 4: Install Azure Connector

Configure Connection Server

The Connection Server component of the ThingWorx Azure IoT Connector must be configured with information specific to both your Azure IoT Hub and your ThingWorx Foundation server.

1. Copy the file azure-iot-sample.conf in the connector > conf directory and save the file with the name azure-iot.conf.
2. Edit the configuration file to replace the ten placeholder values for the parameters listed below with values copied from your Azure control panel.

- consumerPolicyName - Whether you have a new or existing hub, you need to provide the name of the consumer policy and its related primary or secondary key. The policy to select is typically the built-in, pre-defined policy called service. Navigate to All Resources > your hub > Settings > Shared access policies > service. If you added a custom service policy to your hub with the permission service connect, select that policy.
- consumerPolicyKey - Once you find the policy name, stay in the Shared access policies screen and click the name. Copy the content of the Primary key. This key supplies the credentials to access the services that are specified in the related policy.
- registryPolicyName - Specify a policy name that is related to the registryPolicyKey. This policy is typically a built-in, predefined policy called registryReadWrite, but it is possible to use a custom policy if you add it to your hub. The shared access policy requires the registry read and registry write permissions.
- registryPolicyKey - The key that supplies credentials to access services in the policy specified in registryPolicyName.
- hubName - A name that defines the Azure IoT Hub related to this ThingWorx Connector. This Azure IoT Hub manages your things and their related messages. Hubs can be scaled via hub units at different price tiers per unit. Hubs are related to a resource group, which is related to a subscription ID and a cloud location. TIP: To find the name associated with your Azure IoT Hub that the ThingWorx Connector will use to communicate with it, navigate to All Resources > your hub > Settings > Properties > NAME.
- eventHubName - The Event Hub-compatible name that is used by SDKs and integrations that expect to read from Event Hubs. An Event Hub is an internal component of an Azure IoT Hub that handles device-to-cloud events for related things. In many cases, the IoT Hub name and Event Hub-compatible name are the same, so this property defaults to the Azure IoT Hub name (hubName). Navigate to All Resources > your hub > Settings > Built-in endpoints > Events > Event Hub-compatible name to find this name.
- eventHubNamespace - To find the endpoint that is used by SDKs and integrations that expect to read from Event Hubs, navigate to All resources > your hub > Settings > Built-in endpoints > Events > Event Hub-compatible endpoint, and copy the host name without the rest of the address (".servicebus.windows.net"). The ThingWorx Azure IoT Connector uses this endpoint to read messages from your hub.
- consumerGroup - To find a consumer group name to enable the Connector to pull data from the Azure IoT Hub, navigate to All Resources > your hub > Messaging > Endpoints > Built-in endpoints > Events > Consumer groups. To use the $Default consumer group, set this property to null.
- hubHostname - To find the host name for your hub, navigate to All Resources > your hub > Overview > Hostname. The host name is defined by the hubName plus a domain name that is chosen by Azure, typically azure-devices.net.
- blob-storage.account-name - The blob-storage section specifies the settings for an Azure blob storage account. The storage provides containers that are used for device export from an Azure IoT Hub to ThingWorx via the Connector, and can also be created by the Connector if you create AzureStorageContainerFileRepository things in ThingWorx. If you do not have one, create a Storage Account in the Azure portal. To find the name of an existing account, navigate to Settings > Access Keys > Storage account name.
- blob-storage.account-key - The key to associate with the name of the blob storage account. To find the key for an existing account, navigate to Settings > Access Keys > Primary or secondary key.

3. Enter your ThingWorx Foundation server host, port, and appKey in the transport.websockets section:

transport.websockets {
  app-key = "6d70dfca-fe88-4d8c-83aa-686449b52cb2"
  platforms = "ws://45.23.12.112:80/Thingworx/WS"
}

NOTE: If you are using an SSL connection to your ThingWorx Foundation server, use wss in place of ws in the platforms parameter. If the URL for the ThingWorx Foundation server does not include a port, use 80 for http connections and 443 for https.

Step 5: Launch IoT Hub Connector

1. Open a shell or a command prompt window. On a Windows machine, open the command prompt as Administrator.
2. Set the AZURE_IOT_OPTS environment variable before starting the Azure IoT Hub Connector. Below are sample commands using the default installation directory.

On Windows:
set AZURE_IOT_OPTS=-Dconfig.file=C:\ThingWorx-Azure-IoT-Connector-<version>\azure-iot-<version>-application\conf\azure-iot.conf -Dlogback.configurationFile=C:\ThingWorx-Azure-IoT-Connector-<version>\azure-iot-<version>-application\conf\logback.xml

On Linux:
export AZURE_IOT_OPTS="-Dconfig.file=/var/opt/ThingWorx-Azure-IoT-Connector-<version>/azure-iot-<version>-application/conf/azure-iot.conf -Dlogback.configurationFile=/var/opt/ThingWorx-Azure-IoT-Connector-<version>/azure-iot-<version>-application/conf/logback.xml"

NOTE: You must run the export command each time you open a shell or command prompt window.

3. Change directories to the bin subdirectory of the Azure IoT Hub Connector installation.
4. Start the Azure IoT Hub Connector with the appropriate command for your operating system.

On Windows:
azure-iot.bat

On Linux:
./azure-iot

NOTE: On Windows you may have to shorten the installation directory name or move the bin directory closer to the root directory of your system to prevent exceeding the Windows limit on the classpath length.

The Connection Server should start with no errors or stack traces displayed.
If the program ends, check the following:

- The Java version is 1.8.0, update 92 or greater, and is Java(TM), not OpenJDK.
- Open azure-iot.conf and confirm that ThingWorx Foundation is set to the correct URL and port. Confirm the platform scheme is ws if http is used to access ThingWorx.
- Confirm all Azure credentials are correct for your Azure account.

In ThingWorx Foundation, click the Monitoring tab, then click Connection Servers. You should see a server named azure-iot-cxserver-{server-uuid}, where {server-uuid} is a unique identifier that is assigned automatically to the server.

Step 6: Import Device from Azure

With the ThingWorx Azure IoT Connector, you can import into ThingWorx any existing devices that are currently provisioned to the Azure IoT Hub.

Add Device to Azure IoT Hub

If you have not provisioned any devices to your Azure IoT Hub, you can learn more about Azure IoT Hub device identity before following the steps below to create a test device.

1. In your Azure Portal, click All Resources, then select the name of your IoT Hub.
2. Under Explorers, click IoT devices, then click + Add.
3. Enter a name for your device, then click Save.
4. When the device name appears in the list, it is ready to use.

Import Device into ThingWorx

We will manually execute a service in ThingWorx that imports Azure IoT Hub devices into ThingWorx.

1. In ThingWorx Composer, navigate to the ConnectionServicesHub Thing.
2. Click the Services tab, scroll to the ImportAzureIotDevices service, and click the execute arrow. NOTE: The * in the pattern field acts as a wildcard and imports all devices; you can enter a string to match so that only a subset of the available devices is imported.
3. Click Execute to import the devices, then click Done.
4. Click Things in the left column to see the Things that were created.

Click here to view Part 2 of this guide.
Step 8: Configure Template File (Service)

Services are implemented as Lua functions. In our Lua script, Services are divided into two pieces. The first is the Service definition, which consists of a Service name, inputs, and output. The second part of defining a Service is the Service code, which is run when you execute the Service.

Create Service Definition

1. Open the PiTemplate.lua file.
2. Append the service definition to the file. Create a service named GetSystemProperties that gets the system properties (CPU temperature, clock frequencies, voltages) from your Raspberry Pi and updates the respective properties on the ThingWorx platform. Specify your output type but not the name, because the name of every output from a ThingWorx service is always result.

serviceDefinitions.GetSystemProperties(
    output { baseType="BOOLEAN", description="" },
    description { "updates properties" }
)

NOTE: This service has no input parameters and an output that results in true if the properties were successfully updated on ThingWorx.

Create Service Code

The Service code is run when you execute the Service. Functions in Lua are variables, therefore to define the Service code you create a variable. The name of the Service has to match the name you specified in the Service definition.

1. Copy the service code below, with comments explaining the logic, and append it to your template file, or download and unzip the full PiTemplate.zip attached here.

services.GetSystemProperties = function(me, headers, query, data)
    log.trace("[PiTemplate]","########### in GetSystemProperties#############")
    queryHardware()
    -- if properties are successfully updated, return HTTP 200 code with a true service return value
    return 200, true
end

function queryHardware()
    -- use the vcgencmd shell command to get raspberry pi system values and assign to variables
    -- measure_temp returns value in Celsius
    -- measure_clock arm returns value in Hertz
    -- measure_volts returns value in Volts
    local tempCmd = io.popen("vcgencmd measure_temp")
    local freqCmd = io.popen("vcgencmd measure_clock arm")
    local voltCmd = io.popen("vcgencmd measure_volts core")

    -- set property temperature
    local s = tempCmd:read("*a")
    s = string.match(s,"temp=(%d+\.%d+)");
    log.debug("[PiTemplate]",string.format("temp %.1f",s))
    properties.cpu_temperature.value = s

    -- set property frequency
    s = freqCmd:read("*a")
    log.debug("[PiTemplate]",string.format("raw freq %s",s))
    s = string.match(s,"frequency%(..%)=(%d+)");
    s = s/1000000
    log.debug("[PiTemplate]",string.format("scaled freq %d",s))
    properties.cpu_freq.value = s

    -- set property volts
    s = voltCmd:read("*a")
    log.debug("[PiTemplate]",string.format("raw volts %s", s))
    s = string.match(s,"volt=(%d+\.%d+)");
    log.debug("[PiTemplate]",string.format("scaled volts %.1f", s))
    properties.cpu_volt.value = s
end

tasks.refreshProperties = function(me)
    log.trace("[PiTemplate]","~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ In tasks.refreshProperties~~~~~~~~~~~~~ ")
    queryHardware()
end

2. Save the PiTemplate.lua file (in nano, press Ctrl+X and confirm the save).

Step 9: Run LSR

1. Navigate from the installation directory to the microserver directory: cd microserver
2. Ensure that wsems is running in a separate terminal session before you start running LuaScriptResource.
3. Ensure that you have ownership of the luaScriptResource executable and executable privileges. To check ownership:

ls -la
-rwxrwxr-x 1 pi pi 769058 Jun 9 17:46 luaScriptResource

NOTE: The owner of luaScriptResource should be the user you are logged in as on the Raspberry Pi.
4. Ensure you have executable privileges by running the following command:

sudo chmod 775 luaScriptResource

5. Run LuaScriptResource by executing the following command:

sudo ./luaScriptResource

6. The output will show an error until we create the corresponding Thing in the next step.

Step 10: Bind Remote Thing Properties

Now we need to register a Thing so your Raspberry Pi can bind to its Properties on the ThingWorx platform.

1. Create a Thing named PiThing that will bind to the scripts.PiThing defined in config.lua.
   - Open your Composer screen.
   - Click Things in the left navigation, then the + symbol.
   - Enter PiThing in the Name field and select RemoteThing in the Thing Template field.
   - Click Save.
2. Ensure that the Remote Thing is connected.
   - Click Properties in the left-hand navigation.
   - Verify that the isConnected Property has a value of true. This means that your Raspberry Pi is still connected and now bound to this Thing on the ThingWorx platform.
3. Bind the remote Thing Properties.
   - Make sure the Properties tab is selected and click Edit at the top of the PiThing.
   - Click Manage Bindings.
   - Select the Remote tab at the top.
   - Click Add All Above Properties, or drag and drop the ones you need.
   - Click Done, then click Save.
4. Verify that the Properties were updated with readings from the Raspberry Pi. Both the Value and the Default Value for the three Properties will be set to the current reading from the Raspberry Pi.
5. Cover the Raspberry Pi and wait about a minute, then select the Properties tab and click Refresh. You will see the cpu_temperature value change.

NOTE: The system properties from your Raspberry Pi are now being passed to the server every 30 seconds. Wait a couple of cycles to see if the values from the Raspberry Pi change. If you are impatient, manually change the value of the property using the Set button in Composer, then click Refresh to see the updated value. The value will be temporarily updated for about 30 seconds until the Raspberry Pi reports the current live value.

Troubleshooting Tips

Tip #1: If the properties are not updating, stop and start both the wsems and luaScriptResource processes. Type quit in the running session, then restart with:

sudo ./wsems or sudo ./luaScriptResource

Tip #2: If wsems and/or luaScriptResource was not shut down gracefully, sometimes the process is still running, which can cause issues. You can search for and kill any running wsems/luaScriptResource processes with the following commands, then re-run GetSystemProperties to test whether this fixed the issue.

ps -efl
kill -9 <id#>

Step 11: View Data from Devices

In order to demonstrate how ThingWorx can render a visualization of data from connected devices, at this point in the lesson you will import a pre-configured Mashup.

1. On the ThingWorx server that the EMS is connected to, start on the Home tab of Composer.
2. Import the pre-built Mashup.
   - Download and save the pre-built Mashup XML file attached here: Mashups_PiThingMashup-v91.xml.
   - In Composer, click the Import/Export drop-down at the bottom-left.
   - Click Import.
   - Leave all default values and click Browse to select the Mashups_PiThingMashup-v91.xml file that you just downloaded.
   - Click Open, then Import, and once you see the success message, click Close.
3. View the Mashup displaying live data.
   - Select the home icon in the top left side of Composer, then click Mashups in the left-navigation panel.
   - Click Mashups_PiThingMashup-v91 and you'll see the design view of the Mashup.
   - Click View Mashup, and you'll see the live Mashup.
TIP: You will need to allow pop-ups in your browser for the Mashup to be displayed.     Click here to view Part 4 of this guide. 
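As a final check of the binding from Step 10, the bound properties can also be read from a server-side service. A minimal sketch (JavaScript), assuming the PiThing name and the property names used in this tip:

// Read PiThing's remote-bound properties from a ThingWorx service
var pi = Things["PiThing"];
if (pi.isConnected) {
    // cpu_temperature (Celsius), cpu_freq (MHz) and cpu_volt (Volts)
    // are the properties bound in Step 10
    logger.info("temp=" + pi.cpu_temperature +
                " freq=" + pi.cpu_freq +
                " volts=" + pi.cpu_volt);
} else {
    logger.warn("PiThing is not currently connected");
}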
View full tip
Check out this new KCS article, which links to all known best-practice documents available for ThingWorx. This article will grow over time as more articles are published related to the dos and don'ts of building an IoT application! Do you know when to use timers, and where to implement their subscriptions? How about ensuring info tables are used at the proper time, and data tables at others? Pesky performance issues where ThingWorx runs slowly for apparently no reason? All of these questions and more are addressed here!
View full tip
This video will walk you through the first steps of how to set up Analytics Manager for Real-Time Scoring. More specifically, this video demonstrates how to create an Analysis Provider and start the ThingPredictor Agent. NOTE: For version 8.1, the startup command for the Agent has changed; see the command in the PTC Help Center.

Updated link for access to this video: ThingWorx Analytics Manager: Create an Analysis Provider & Start the ThingPredictor Agent
View full tip
Connect Devices and Equipment Using Industry-Standard Protocol Drivers in ThingWorx

GUIDE CONCEPT

This guide has step-by-step instructions for connecting ThingWorx Kepware Server to ThingWorx Foundation.

This guide will demonstrate the ease of connecting edge industrial equipment to ThingWorx Foundation without installing any software on production equipment.

We will also show how connecting different systems and devices improves operations through the creation of business intelligence.

YOU'LL LEARN HOW TO

- Connect ThingWorx Kepware Server to ThingWorx Foundation
- Secure the connection with an Application Key
- Create an IndustrialGateway Thing
- Map ThingWorx Kepware Server Tags to ThingWorx Foundation Thing Properties
- Visualize Data from connected digital assets

Note: The estimated time to complete this guide is 30 minutes

Step 1: Installation

1. Download the ThingWorx Kepware Server executable application from MyKepware. If you need installation instructions, you can find them in the attached guide: install-thingworx-kepware-server.zip.
2. Navigate to START -> PTC.
3. Click ThingWorx Kepware Server 6 Configuration.

For additional information on ThingWorx Kepware Server, click Help -> Server Help on the Menu Bar.

Step 2: Create Gateway

To make a connection between ThingWorx Kepware Server and Foundation Server, you must first create a Thing.

WARNING: To avoid a timeout error, create a Thing in ThingWorx Foundation BEFORE attempting to make the connection in ThingWorx Kepware Server.

1. In ThingWorx Foundation Composer, click Browse -> Modeling -> Things.
2. Click + NEW.
3. In the Name field, type IndConn_Server, including matching capitalization.
4. In the Description field, enter an appropriate description, such as Industrial Gateway Thing to connect to ThingWorx Kepware Server.
5. If the Project field is not already set, search for and select PTCDefaultProject.
6. In the Base Thing Template field, search for and select IndustrialGateway.
7. Click Save.

Step 3: Create an AppKey

To secure the connection between ThingWorx Foundation and ThingWorx Kepware Server, you need to utilize an Application Key.

1. In ThingWorx Foundation, click Browse -> Security -> Application Keys.
2. Click + New.
3. In the Name field, type IndConn_AppKey.
4. If Project is not already set, search for and select PTCDefaultProject.
5. Set User Name Reference to the Administrator User. Click Yes on the warning pop-up.
6. Assign an Expiration Date that is far enough in the future to not interfere with your trial.
7. Click Save.
8. Under Key ID, click the page icon to the right of the Application Key string to copy it.

Step 4: Connect to Foundation

Now that you've created an IndustrialGateway Thing and an Application Key, you can configure ThingWorx Kepware Server to connect to ThingWorx Foundation.

1. Return to the ThingWorx Kepware Server Windows application.
2. Right-click Project and select Properties….
3. In the Property Editor pop-up, click ThingWorx.
4. In the Enable field, select Yes from the drop-down.
5. In the Host field, enter the IP address or URL of your ThingWorx Foundation server.
6. Enter the Port number. If you are using the "hosted" Developer Portal trial, enter 443.
7. In the Application Key field, copy and paste the Application Key you just created.
8. In the Trust self-signed certificates field, select Yes from the drop-down.
9. In the Trust all certificates field, select Yes from the drop-down.
10. In the Disable encryption field, select No from the drop-down if you are using a secure port; select Yes if you are using an http port.
11. Type IndConn_Server in the Thing name field, including matching capitalization.
12. If you are connecting with a remote instance of ThingWorx Foundation and you expect any breaks or latency in your connection, enable Store and Forward.
13. Click Apply in the pop-up.
14. Click OK.

In the ThingWorx Kepware Server Event window at the bottom, you should see a message indicating Connected to ThingWorx.

NOTE: If you do not see the "Connected" message, repeat the steps above, ensuring that all information is correct. In particular, check the Host, Port, and Thing name fields for errors.

Click here to view Part 2 of this guide.
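Once the Event window reports Connected to ThingWorx, the link can also be confirmed from the platform side. A minimal sketch (JavaScript) for a ThingWorx service, assuming the IndConn_Server Thing created above exposes the standard isConnected property:

// Check the industrial gateway's connection state from a ThingWorx service
var gateway = Things["IndConn_Server"];
if (gateway.isConnected) {
    logger.info("IndConn_Server is connected to ThingWorx Kepware Server");
} else {
    logger.warn("IndConn_Server is not connected - re-check Host, Port and Application Key");
}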
View full tip
One of the issues we encountered recently was that we could not establish a VNC remote session. The edge device was located outside of the internal network where Tomcat was hosted, and all access to the instance was through an Apache reverse proxy. The EMS was able to connect successfully to the server because Apache had correctly set up the WebSocket forwarding through the following directive:

ProxyPass "/Thingworx/WS/"  "wss://192.168.0.2/Thingworx/WS"

However, we saw that tunnels immediately closed after creation, and as a result (or so we thought), we could not connect from the HTML5 VNC viewer. Further diagnostics revealed that you also need ProxyPass directives for the following:

- The EMS makes calls to another WS endpoint, called WSTunnelServer. After you set this up, the EMS is able to create tunnels to the server.
- The HTML5 VNC page makes a WebSocket call to yet another WS endpoint, called WSTunnelClient.

Only after this step are you able to successfully use tunnels through a reverse proxy (see the example directives below). Hope it helps!
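For reference, a sketch of the two additional directives, following the same pattern as the /Thingworx/WS/ rule above. The exact endpoint paths and the backend address are assumptions based on the endpoint names mentioned in this post; verify them against your own deployment:

ProxyPass "/Thingworx/WSTunnelServer/"  "wss://192.168.0.2/Thingworx/WSTunnelServer"
ProxyPass "/Thingworx/WSTunnelClient/"  "wss://192.168.0.2/Thingworx/WSTunnelClient"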
View full tip
About

This is the third part of a ThingBerry-related blog post series. ThingBerry is ThingWorx installed on a Raspberry Pi, which can be used for portable demonstrations without the need to utilize e.g. customer networks. Instead, the ThingBerry provides its own custom WIFI hotspot and allows Things to connect and send / receive demo data on a small scale.

In this particular blog post we'll discuss how to set up the ThingBerry for encrypted communication and trust via certificates. We will create a custom Root and Intermediate Certificate Authority as well as a server-specific certificate. More information on the theory of Trust & Encryption can be found here: Trust & Encryption - Theory

As the ThingBerry is a highly unsupported environment for ThingWorx, please see this blog post for all related warnings.

Preparing the environment

To get started and keep all the working materials safe and secure within the scope of the pi user account, let's start by creating a workspace directory and then installing the openssl package:

cd ~
mkdir certs
chmod 700 certs
cd certs
sudo apt-get install openssl

Troubleshooting Tips

Keytool

To look at a specific certificate, use keytool and specify the .crt file to look at:

keytool -printcert -file rootca.crt

This will show information on the certificate's configuration, such as signing authority or validity.

Timestamps

It's important that the Chain of Trust follows a consistent timeline. The Root CA has the longest expiration date. Within its validity, the Intermediate CA must be valid. The server-specific certificate must be valid within the timeframe of the Intermediate CA's validity. As the validity is calculated in seconds from the moment of signing, I'm using the following validity periods:

Root CA: 999 days
Intermediate CA: 888 days
Server-specific certificate: 777 days

Any other times that make sense can be used, as long as the time constraints are followed. Otherwise the certificate will not be recognized as valid and will therefore not be trusted.

Creating a Chain of Trust

For this example we'll create a Root and Intermediate Certificate Authority (CA) as well as the server-specific certificate. In a first step we can create all of the required keys:

openssl genrsa -des3 -out rootca.key 4096
openssl genrsa -out intermediateca.key 4096
openssl genrsa -out server.key 4096

The -des3 option encrypts the Root CA key with a password for more secure access. This tightens up security, so that the key for the Root CA is not easily corrupted or exposed.

Root Certificate Authority

Having the keys, we now create the Root CA itself:

openssl req -new -x509 -sha256 -days 999 -key rootca.key -reqexts v3_req -extensions v3_ca -out rootca.crt

Enter the information, starting with the password for the Root CA key. Use an easy-to-identify name for the Common Name (CN), e.g. "My Root CA".

Intermediate Certificate Authority

As the Intermediate CA needs to be signed / approved by the Root CA, we need a Certificate Signing Request (CSR):

openssl req -new -key intermediateca.key -reqexts v3_req -extensions v3_ca -out intermediateca.csr

The v3_ca extension will mark this certificate as a CA itself. This means that we can use the Intermediate CA to sign other certificates as well. The CSR will be signed with the Root CA's key and certificate. For this, the CSR must be sent to whoever owns the Root CA. In our case, we're the owner ourselves and have everything in the same directory, so there's no need to send it off to Let's Encrypt, Google or VeriSign.
For signing the Intermediate CA as an actual CA, the Root CA must have a specific configuration that allows the v3_ca extension. This needs to be created via:

nano v3_ca.ext

Into this new file, copy & paste the following:

basicConstraints        = CA:TRUE
subjectKeyIdentifier    = hash
authorityKeyIdentifier  = keyid:always,issuer:always
keyUsage                = cRLSign, dataEncipherment, digitalSignature, keyCertSign, keyEncipherment, nonRepudiation

Save and exit. Finally, the Root CA signs the Intermediate CA, using the v3_ca extension:

openssl x509 -req -in intermediateca.csr -CA rootca.crt -CAkey rootca.key -CAcreateserial -CAserial intermediateca.srl -extfile v3_ca.ext -days 888 -sha256 -out intermediateca.crt

Certificate Authority Chain

Having two CAs now, they need to be chained together. This is required later to create the keystore used in Tomcat. Creating the chain is probably the easiest part of this blog:

cat intermediateca.crt rootca.crt > cachain.crt

Server Specific Certificate

The server-specific certificate has to be signed and issued by the Intermediate CA; for this, a CSR is required:

openssl req -new -key server.key -out server.csr

The request is then signed with the Intermediate CA's key and certificate. For commercial certificates this signing process would be done by publicly trusted CAs, like Let's Encrypt, VeriSign or Google. In our case, we're the Intermediate CA ourselves and can sign the CSR:

openssl x509 -req -in server.csr -CA intermediateca.crt -CAkey intermediateca.key -CAcreateserial -CAserial server.srl -days 777 -sha256 -out server.crt

Creating Keystores

For the keystore as used in Tomcat, we need the CA chain as well as the server-specific certificate and the corresponding key. To generate a keystore that can be used in Tomcat, we first need to create a PKCS12-type keystore with openssl:

openssl pkcs12 -export -certfile cachain.crt -in server.crt -inkey server.key -out server.p12

This PKCS12 keystore can now be transformed into a JKS keystore by importing the PKCS12 information into a new JKS keystore:

keytool -importkeystore -srckeystore server.p12 -srcstoretype PKCS12 -destkeystore keystore.jks -deststoretype JKS

Important: the passwords for the PKCS12 keystore and the JKS keystore must match! Tomcat requires those passwords to be the same - if they are different, the configuration will not work and ThingWorx cannot be accessed through a secure connection.

Configuring ThingWorx

For ThingWorx, only the keystore is required. It can be copied over e.g. to the certificates area in the storage directory:

sudo cp keystore.jks /thingworx/storage/certificates
sudo chmod 640 /thingworx/storage/certificates/keystore.jks
sudo chown tomcat:tomcat /thingworx/storage/certificates/keystore.jks

To enable it, Tomcat's server.xml needs to be updated:

sudo nano $CATALINA_HOME/conf/server.xml

Find the connector for port 80 and add a new connector after it:

<Connector port="443" protocol="org.apache.coyote.http11.Http11NioProtocol"
           maxThreads="150" connectionTimeout="20000" redirectPort="8443"
           SSLEnabled="true" scheme="https" secure="true" clientAuth="false" sslProtocol="TLSv1.2" enableLookups="false"
           keystoreFile="/thingworx/storage/certificates/keystore.jks" keystorePass="<keystorePass>" />

Don't forget to update the password.
Stop Tomcat:

sudo service tomcat stop

Ensure Tomcat is actually down with:

ps -ef | grep tomcat

If it's still running, just sudo kill it, then start Tomcat again to activate the new configuration:

sudo service tomcat start

And now? Congratulations! ThingWorx is now running on your ThingBerry using a secure channel. Access it via https://<thingberry>/Thingworx

As HTTPS uses port 443 by default, you will automatically connect to the port configured in server.xml. Classic HTTP connections on port 80 can still be used.

For more information about setting up ThingWorx with (self-signed) certificates, also see https://support.ptc.com/appserver/cs/view/solution.jsp?n=CS193947 - there's also a note on forwarding all HTTP traffic to HTTPS. See also https://support.ptc.com/appserver/cs/view/solution.jsp?n=CS246292 for more information about using specific SSL protocols or cipher suites.
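Before restarting Tomcat, it can be worth verifying that the keystore actually contains the full certificate chain. A minimal check, assuming the keystore path used above:

keytool -list -v -keystore /thingworx/storage/certificates/keystore.jks

The -v output should show the server certificate together with the chain length and the issuing CAs.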
View full tip
This Zip file contains the Axeda patch (axeda-jms-plugin-<version>-machine-streams) required for proper installation and configuration of an Apache ActiveMQ server to use with the Axeda Machine Streams service (which is supported for Axeda Platform v6.8 and later). Note: Information about the Axeda Machine Streams feature is provided in the Axeda Features Guide available from the Axeda Support site, http://help.axeda.com. This patch overlay needs to be applied to the v5.8.0 ActiveMQ server installed as the Axeda Machine Streams endpoint broker, so that Axeda Platform can send streamed content to that server endpoint. Complete instructions for installing and configuring an Apache ActiveMQ server for Axeda Machine Streams are provided in the reference, Axeda® Machine Streams: A Guide to Setting Up Broker Endpoints. This guide is available with all Axeda product documentation from the PTC Support site.
View full tip
Hi Community,

Although we have reference architectures and integration paths for connecting devices to ThingWorx through Azure IoT, no one has ever written anything about doing the same from one ThingWorx to another. I thought I'd change that and put some ideas out there around how one might go about doing this. Although this is not officially supported or recommended by PTC, I have consulted with a number of leading SMEs on the subject, who have participated in forming the basis of my thinking outlined here.

Components Required (in order of communication path):

1. On-premise ThingWorx Platform
2. Protocol Adapter Toolkit* (CXS) - MQTT
3. Azure IoT Edge
4. Azure IoT Hub
5. ThingWorx Azure IoT Hub Connector (CXS)
6. Azure Cloud-hosted ThingWorx Platform

The PAT (2), with a codec to encode MQTT messages, publishes to the on-premise IoT Edge MQTT endpoint, which handles store-and-forward of messages to the IoT Hub. An Azure IoT device would exist for each Thing you wish to represent on the ThingWorx servers. The Azure IoT Hub Connector picks up the incoming messages and passes them on to the cloud ThingWorx, which decodes the MQTT payload and maps it to Thing property updates.

The only part that I presently don't like about this approach is that you'll need to decode the MQTT messages on the ThingWorx platform in the cloud when they are received from the IoT Hub, and this mechanism will also need to handle encoding and publishing back to the IoT Hub if C2D (Cloud-to-Device) messages are to be implemented (aka bi-directional). This is required as ThingWorx only supports AlwaysOn as an application-level protocol, so some form of mapping needs to be done.

* Another approach would be to replace the PAT with a custom agent which implements both the ThingWorx Edge SDK and the Azure IoT device SDK.

Regards,

Greg Eva
View full tip
Video Author: John Greiner
Original Post Date: January 3, 2017
Applicable Releases: ThingWorx Analytics 52.0 to 8.0

Description: This video walks you through how to access the ThingWorx Analytics Interactive API Guide.

1. Get the IP address of the ThingWorx Analytics Server by typing: ip a
2. Enter that IP address into the desired web browser.
3. Add the port number of the server to the end of the IP address, with a colon ":" between the end of the IP address and the start of the port number. The default port number is 8080, though it could differ in some cases, depending on whether it was configured differently during installation.
4. Hit Enter and the main page will load.
View full tip
Finally there is an article which combines all of the available resources on certificate configuration to better enable developers to complete their production-worthy edge devices. Please see the official PTC documentation located here. Please feel free to comment with any questions, comments, or feedback on this! Happy developing!
View full tip
I've had a lot of questions over the years working with Azure IoT, Kepware, and ThingWorx that I really struggled to get answers to. I was always grateful when someone took the time to help me understand, and now it is time to repay the favour.

People ask me many things about Azure (in a ThingWorx context), and one of the common ones has been about MQTT communications from Kepware to ThingWorx using IoT Hub. Recently the topic has come up again as more and more of the ThingWorx expert community start to work with Azure IoT. Today, I took the time to build, test, validate, and share an approach and utilities to do this in cases where the Azure Industrial IoT OPC UA integration is overkill or simply a step later in the project plan. Enjoy!

End-to-end integration of Kepware to ThingWorx using MQTT over Azure IoT (YouTube 45-minute deep-dive)

ThingWorx entities for import (ThingWorx 9.0)

This approach can be quite good for a simple demo if you have a Kepware Integrator or Kepware Enterprise license, but the use of IoT Gateway for many servers and tags can be quite costly.

Those looking to leverage Azure IoT Hub for MQTT integration to ThingWorx would likely also find this recorded session and shared utilities quite helpful.

Cheers, Greg
View full tip
When we connect to JDBC databases, we want to leverage the database server's capabilities as much as possible to do the querying, aggregating and everything else for us. So if we need to do filtering or math, add it to the query statement so it is done server-side, by the database server, not the ThingWorx runtime server. Beyond that, there is much more that can be done, since databases are powerful (all the good stuff like joins, union, distinct, generated and calculated fields and what not). The more we can do on the database server side, the better it is for ThingWorx runtime performance.

Now everyone hopefully knows the [[ ]] parameter substitution. With it we can easily build an SQL service that has several input parameters; it will look like:

Select * from table where item1=[[par1]] AND item2=[[par2]]

But we can take this up a notch with the super powerful yet super dangerous << >>. Now we can create a service that just says <<sqlQuery>> and use another service to build something like:

select * from table where item1 in ('val1','val2','val3')

If you can avoid it, only use [[ ]]; but if needed, there is always << >>. You must then make sure that you properly secure that service with the system user and validate the input in the service that invokes it, since << >> is vulnerable to SQL injection (see the sketch below).
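A minimal sketch of that calling-service pattern (JavaScript). The service name RunDynamicQuery and the Thing name DatabaseThing are hypothetical, and the whitelist stands in for whatever validation your application actually needs:

// Build and validate a dynamic IN-list before handing it to a <<sqlQuery>> service.
// statusList is assumed to be a STRING input of this service, e.g. "open,closed".
var allowed = ["open", "closed", "pending"]; // whitelist of permitted values
var sanitized = [];
var values = statusList.split(",");
for (var i = 0; i < values.length; i++) {
    var v = values[i].trim();
    if (allowed.indexOf(v) < 0) {
        throw "Invalid status value: " + v; // reject anything not whitelisted
    }
    sanitized.push(v);
}
// Only after validation do we build the statement for the <<sqlQuery>> service
var query = "SELECT * FROM orders WHERE status IN ('" + sanitized.join("','") + "')";
// RunDynamicQuery is a hypothetical SQL (Query) service containing just <<sqlQuery>>
var result = Things["DatabaseThing"].RunDynamicQuery({ sqlQuery: query });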
View full tip
Step 4: Write Data to External Database

You've connected to the database and you're able to query it. Now let's handle inserting new data into the database. The insert statements and data shown below are based on the table scripts provided in the download. Examples of how the ThingWorx entities should look can be seen in the SQLServerDatabaseController and OracleDatabaseController entities that you've downloaded.

Running an Insert

Follow the steps below to set up a helper service to run commands against the database. While other services might generate the query to be used, this helper service will be your shared execution service.

1. In the DatabaseController entity, go to the Services tab.
2. Create a new service of type SQL (Command) called RunDatabaseCommand.
3. Keep the Output as Integer.
4. Add the following parameter:

Name     Base Type  Required
command  String     True

5. Add the following code to your new service:

<<command>>

6. Click Save and Continue.

You now have a service that can run commands against the database. Run your service with a simple insert.

There are two ways to go from here. You can either query the database using services that call this helper service, or you can create more SQL Command services that query the database directly. Let's go over each method next, starting with a service that calls the helper.

1. In the Services tab of the DatabaseController entity, create a new service of type JavaScript.
2. Name the service JavaScriptInsert_PersonsTable.
3. Set the Output as InfoTable, but do not set the DataShape for the InfoTable.
4. Add the following code to your new service:

try {
    var command = "INSERT INTO Persons (person_key, person_name_first, person_name_last, person_email, person_company_name, " +
        "person_company_position, person_addr1_line1, person_addr1_line2, person_addr1_line3, person_addr1_city, person_addr1_state, " +
        "person_addr1_postal_code, person_addr1_country_code, person_addr1_phone_number, person_addr1_fax_number, person_created_by, " +
        "person_updated_by, person_created_date, person_updated_date) VALUES ('" +
        key + "', '" + name_first + "', '" + name_last + "', '" + email + "', '" + company_name + "', '" +
        company_position + "', '" + addr1_line1 + "', '" + addr1_line2 + "', '" + addr1_line3 + "', '" +
        addr1_city + "', '" + addr1_state + "', '" + addr1_postal_code + "', '" + addr1_country_code + "', '" +
        addr1_phone_number + "', '" + addr1_fax_number + "', '" + created_by + "', '" + updated_by + "', '" +
        created_date + "', '" + updated_date + "')";
    logger.debug("DatabaseController.JavaScriptInsert_PersonsTable(): Query - " + command);
    var result = me.RunDatabaseCommand({command: command});
} catch(error) {
    logger.error("DatabaseController.JavaScriptInsert_PersonsTable(): Error - " + error.message);
}

5. Add the following parameters (email is included here because the code above uses it):

Name                Base Type  Required
key                 String     True
name_first          String     True
name_last           String     True
email               String     True
company_name        String     True
company_position    String     True
addr1_line1         String     True
addr1_line2         String     True
addr1_line3         String     True
addr1_city          String     True
addr1_state         String     True
addr1_postal_code   String     True
addr1_country_code  String     True
addr1_phone_number  String     True
addr1_fax_number    String     True
created_by          String     True
updated_by          String     True
created_date        String     True
updated_date        String     True

6. Click Save and Continue. A usage sketch for this helper appears below.
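A quick usage sketch for the JavaScript insert helper above (JavaScript); the sample values are placeholders:

// Invoke the insert helper from another service (or via Composer's Execute dialog)
var params = {
    key: "P-0001", // placeholder person key
    name_first: "Ada",
    name_last: "Lovelace",
    email: "ada@example.com",
    company_name: "Example Corp",
    company_position: "Engineer",
    addr1_line1: "1 Main St",
    addr1_line2: "",
    addr1_line3: "",
    addr1_city: "Boston",
    addr1_state: "MA",
    addr1_postal_code: "02110",
    addr1_country_code: "US",
    addr1_phone_number: "555-0100",
    addr1_fax_number: "",
    created_by: "Administrator",
    updated_by: "Administrator",
    created_date: "2020-01-01",
    updated_date: "2020-01-01"
};
var result = Things["DatabaseController"].JavaScriptInsert_PersonsTable(params);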
Any parameter, especially those entered by users, that is passed into a SQL statement using the database connectors should be fully validated and sanitized before executing the statement! Failure to do so could result in the service becoming an SQL injection vector.

Now, let's utilize the second method and create a query directly to the database. As you build your insert statement, use [[Parameter Name]] for parameter/variable substitution and <<string replacement>> for string substitution.

1. In the Services tab of the DatabaseController entity, create a new service of type SQL (Command).
2. Name the service SQLInsert_PersonsTable.
3. Add the following code to your new service:

INSERT INTO Persons (person_key
    ,person_name_first
    ,person_name_last
    ,person_email
    ,person_company_name
    ,person_company_position
    ,person_addr1_line1
    ,person_addr1_line2
    ,person_addr1_line3
    ,person_addr1_city
    ,person_addr1_state
    ,person_addr1_postal_code
    ,person_addr1_country_code
    ,person_addr1_phone_number
    ,person_addr1_fax_number
    ,person_created_by
    ,person_updated_by
    ,person_created_date
    ,person_updated_date)
VALUES ([[key]]
    ,[[name_first]]
    ,[[name_last]]
    ,[[email]]
    ,[[company_name]]
    ,[[company_position]]
    ,[[addr1_line1]]
    ,[[addr1_line2]]
    ,[[addr1_line3]]
    ,[[addr1_city]]
    ,[[addr1_state]]
    ,[[addr1_postal_code]]
    ,[[addr1_country_code]]
    ,[[addr1_phone_number]]
    ,[[addr1_fax_number]]
    ,[[created_by]]
    ,[[updated_by]]
    ,[[created_date]]
    ,[[updated_date]]);

4. Add the following parameters (email is included here because the code above uses it):

Name                Base Type  Required
key                 String     True
name_first          String     True
name_last           String     True
email               String     True
company_name        String     True
company_position    String     True
addr1_line1         String     True
addr1_line2         String     True
addr1_line3         String     True
addr1_city          String     True
addr1_state         String     True
addr1_postal_code   String     True
addr1_country_code  String     True
addr1_phone_number  String     True
addr1_fax_number    String     True
created_by          String     True
updated_by          String     True
created_date        String     True
updated_date        String     True

5. Click Save and Continue.

Examples of insert services can be seen in the provided downloads.

Step 5: Executing Stored Procedures

There will be times when a singular query will not be enough to get the job done. This is when you'll need to incorporate stored procedures into your database design.

ThingWorx is able to use the same SQL Command service type when executing a stored procedure with no data return, and a SQL Query service type when executing a stored procedure with an expected result set. Before executing these services or stored procedures, ensure they exist in your database. They can be found in the example file provided.

Execute Stored Procedure

Now, let's create the service to handle calling/executing a stored procedure. If you are expecting data from the stored procedure, use EXEC to execute it. If you only need to execute the stored procedure and do not expect a result set, then the EXECUTE statement is good enough. You're also able to use string substitution similar to what we've shown you in the earlier steps.

1. In the DatabaseController entity, go to the Services tab.
2. Create a new service of type SQL (Command) called RunAssignStudentStoredProcedure.
3. Add the following parameters:

Name         Base Type  Required
student_key  String     True
course_key   String     True
4. Add the following code to your new service:

EXECUTE AddStudentsToCourse @student_key = N'<<student_key>>', @course_key = N'<<course_key>>';

You can also perform this execute in a service based on JavaScript using the following code:

try {
    var command = "EXECUTE AddStudentsToCourse " +
        " @student_key = N'" + student_key + "', " +
        " @course_key = N'" + course_key + "'";
    logger.debug("DatabaseController.RunAssignStudentStoredProcedure(): Command - " + command);
    var result = me.RunDatabaseCommand({command:command});
} catch(error) {
    logger.error("DatabaseController.RunAssignStudentStoredProcedure(): Error - " + error.message);
}

5. Click Save and Continue.

Execute Stored Procedure for Data

Let's create the service you will use for this method:

1. In the DatabaseController entity, go to the Services tab.
2. Create a new service of type SQL (Query) called GetStudentCoursesStoredProcedure.
3. Set the Output as InfoTable, but do not set the DataShape for the InfoTable.
4. Add the following parameter:

Name        Base Type  Required
course_key  String     True

5. Add the following code to your new service:

EXEC GetStudentsInCourse @course_key = N'<<course_key>>'

You can also perform this execute in a service based on JavaScript using the following code:

try {
    var query = "EXEC GetStudentsInCourse " +
        " @course_key = N'" + course_key + "'";
    logger.debug("DatabaseController.GetStudentCoursesStoredProcedure(): Query - " + query);
    var result = me.RunDatabaseQuery({query:query});
} catch(error) {
    logger.error("DatabaseController.GetStudentCoursesStoredProcedure(): Error - " + error.message);
}

6. Click Save and Continue.

You've now created your first set of services used to call stored procedures for data. Of course, these stored procedures will need to exist in the database before they can run successfully. A short end-to-end usage sketch follows below.

Click here to view Part 3 of this guide.
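To see the full round trip, a short usage sketch (JavaScript) that assigns a student to a course and then reads back the roster; the key values are placeholders:

// Assign a student to a course, then fetch the updated roster
Things["DatabaseController"].RunAssignStudentStoredProcedure({
    student_key: "S-100", // placeholder keys
    course_key: "C-200"
});
var roster = Things["DatabaseController"].GetStudentCoursesStoredProcedure({
    course_key: "C-200"
});
logger.info("Students in course: " + roster.rows.length);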
View full tip