
Vuforia Studio and Chalk Tech Tips

How can ThingWorx (external) data be used to update an experience in real time? For example, if the data does not fall within a specified range, a warning message should be shown automatically. In ThingWorx, create a Service on the Thing that checks the range and determines whether a warning should be displayed. In Vuforia Studio, add that Service in the project's DATA panel under the External Data section. Under the Configuration section, check all refresh-related checkboxes so that the Service is called regularly. Finally, use the ServiceInvokeComplete event to check the value reported by the Service.
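As a rough sketch of that last step (the Service name, the result field, and the widget name below are placeholders, not part of the original tip), the Service result can also be picked up in Home.js using the '<serviceName>-complete' listener pattern described later in this collection:

// Hypothetical names: 'CheckRange' is the Service added under External Data,
// 'label-warning' is a Label widget used to show the warning message.
$scope.$on('CheckRange-complete', function (event, args) {
  // args.data holds the rows returned by the Service
  var row = args.data && args.data[0];
  if (row && row.outOfRange) { // 'outOfRange' is an assumed field returned by the Service
    $scope.setWidgetProp('label-warning', 'text', 'Value out of range!');
    $scope.setWidgetProp('label-warning', 'visible', true);
  } else {
    $scope.setWidgetProp('label-warning', 'visible', false);
  }
  $scope.$applyAsync();
});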
View full tip
PDFs can be linked to experiences using a few methods. Below is an example of using the toggle widget or a toggle button to open and close a PDF within your experience. Example of JavaScript code to add to the Home.js file:

$scope.toggleButton = function() {
  //if the toggle is pressed
  if ( $scope.view.wdg['toggleButton-1']['pressed']==true) {
    window.location='app/resources/Uploaded/%5BBD-Logbuch%5D20190208-20190310.pdf'
    console.log($scope.view.wdg['file-1']['url'])
  }
  //unpress the toggle button after 1.5 sec
  $timeout(function () { $scope.view.wdg['toggleButton-1']['pressed']=false;}, 1500);
}
View full tip
Vuforia Studio: New 3D Button widget that supports HoloLens 2 articulated hand tracking (3D Eyewear projects only); bug fixes and minor improvements. Vuforia View: Support for Microsoft HoloLens 2; bug fixes and minor improvements. Experience Service: An 8.5.0 version of the Experience Service was not released. However, Experience Service 8.4.6 will support the upcoming ThingWorx 8.5 release.
View full tip
Vuforia Studio: Bug fixes and minor improvements. Vuforia View: Bug fixes and minor improvements. Experience Service: An 8.4.7 version of the Experience Service was not released.
View full tip
Vuforia Studio: Bug fixes and minor improvements. Vuforia View: Vuforia View 8.4.6 is required for viewing Experiences that include Model Targets created with Vuforia Studio 8.4.6; improved detection and tracking of Model Targets and Spatial Targets; bug fixes and minor improvements. Experience Service: Support for Ubuntu 18.04 and RHEL 7.2 - 7.6; bug fixes and minor improvements.
View full tip
Often we need to display section views, or we need a view of a cross-section of the model. This is generally not part of the current functionality, but there are some approaches that can be helpful. In this article I want to mention three different approaches that could be used; none of them is really perfect:

1.) Uploading different models - we can use an additional model for each cut and then switch models when a cut should be displayed; that is, make the one model invisible and display the second model, or vice versa. I tested with cuts created in Creo Illustrate 5.0 and Creo View 5.0, but it seems that Vuforia Studio could not display them, neither as part of a sequence nor as static content of the current figure. The only way that worked in this example is to create an assembly cut in Creo (with the option to cut also on part level) and then create a new .pvz model from it. In this case it seems to work fine.

2.) The second approach is to remove components step by step, so that the inner components become visible when the outer components are blanked. All components that should be displayed or blanked need to be defined as modelItem widgets, where we can set the visible property to true or false. It is also possible to blank or display a list of components where the list is defined in a JSON file. This can be done with JavaScript code, and in that case we do not need to define modelItem widgets. For more information you can check the related post.

3.) The last and most powerful option is to use a shader for the model widget. For example, we can implement a kind of clipping functionality for the model where we set the x min and x max values (or ymin and ymax, or zmin and zmax) of what should be displayed. This creates a clipping effect: only the geometry that satisfies this criterion is displayed.

How to achieve the clipping functionality using a shader:

3.1) Define a shader - this requires creating a tmlText widget where we define the shader code. The code should be inserted into the text property of the tml widget. For clipping along the X axis with planes parallel to the YZ plane we need to define the following GLSL shader code:

<script name="slice_world_based_x" type="x-shader/x-vertex">
attribute vec3 vertexPosition;
attribute vec3 vertexNormal;
varying vec3 N;
varying vec4 vertexCoord;
uniform mat4 modelMatrix;
uniform mat4 modelViewProjectionMatrix;
void main() {
  vec4 vp = vec4(vertexPosition, 1.0);
  gl_Position = modelViewProjectionMatrix * vp;
  vertexCoord = modelMatrix*vp;
  // the surface normal vector
  N = vec3(normalize(modelMatrix* vec4(vertexNormal,0.0)));
}
</script>
<script name="slice_world_based_x" type="x-shader/x-fragment">
// this setting is needed for a certain use of properties
precision mediump float;
// determine varying parameters
varying vec3 N;
varying vec4 vertexCoord;
// determine shader input parameters
uniform vec4 surfaceColor;
uniform float slicex;
uniform float slicewidth;
const vec3 lightPos = vec3(1.0, 2.2, 0.5);
const vec4 ambientColor = vec4(0.3, 0.3, 0.3, 1.0);
void main() {
  // calc the dot product and clamp based on light position
  // 0 -> 1 rather than -1 -> 1
  // ensure everything is normalized
  vec3 lightDir = -(normalize(lightPos));
  vec3 NN = normalize(N);
  // calculate the dot product of the light to the vertex normal
  float dProd = max(0.0, dot(NN, -lightDir));
  // calculate current color of vertex, unless it is being clipped...
  // only geometry with coordinates which satisfy the condition below
  // will be displayed
  if ( vertexCoord.x > (slicex + slicewidth/2.0) || vertexCoord.x < (slicex - slicewidth/2.0) ) {
    discard;
  } else {
    // calculate the color based on light-source and shadows on model
    gl_FragColor = (ambientColor + vec4(dProd)) * surfaceColor;
  }
}
</script>

Save the tmlText widget.

3.2) You can also define shaders for clipping along the Y and the Z axis. You can use the same shader definition, but you need to change the shader name and modify the if check to test the Y or Z coordinate respectively:

<script name="slice_world_based_y" type="x-shader/x-vertex">
...
// calculate current color of vertex, unless it is being clipped...
if ( vertexCoord.y > (slicex + slicewidth/2.0) || vertexCoord.y < (slicex - slicewidth/2.0) ) {
  discard;
} else
...

To set the clipping values for the different axes and to control them with sliders, we can use some code like this:

$scope.DIR="y";
$scope.slice = function() {
  var slicexCur = ($scope.view.wdg['slider-1']['value']/100.0)-0.5;
  var slicewidthCur = ($scope.view.wdg['slider-2']['value']/100.0);
  $scope.view.wdg['modelx']['shader'] = "slice_world_based_"+$scope.DIR+";slicex f "+ slicexCur + ";slicewidth f " + slicewidthCur;
}
////////////////////////////////////////////
$scope.$on('$ionicView.afterEnter', function(){ // Anything you can think of
  $scope.clickY()
});
///////////////////////////////////////
$scope.clickX=function() {
  $scope.view.wdg['toggle-X']['value'] =true
  $scope.view.wdg['toggle-Y']['value'] =false
  $scope.view.wdg['toggle-Z']['value'] =false
  $scope.DIR="x"
}
$scope.clickY=function() {
  $scope.view.wdg['toggle-X']['value'] =false
  $scope.view.wdg['toggle-Y']['value'] =true
  $scope.view.wdg['toggle-Z']['value'] =false
  $scope.DIR="y"
}
$scope.clickZ=function() {
  $scope.view.wdg['toggle-X']['value'] =false
  $scope.view.wdg['toggle-Y']['value'] =false
  $scope.view.wdg['toggle-Z']['value'] =true
  $scope.DIR="z"
}
///////////////////////////////////////

Here the slice() function is called in the slider change event (for both slider widgets). For a better understanding of the functionality you can review the attached Vuforia Studio project. Attached here is the improved version for HoloLens, which was also approved by the dev team: slice_example.zip https://community.ptc.com/sejnu66972/attachments/sejnu66972/tkb_vuforiatechtips/48/4/slice_example.z... Another version (also for HoloLens) is attached as the slice_example_using_ABCD project, where the dev team demonstrates an efficient way to define a cutting plane via the plane ABCD parameters (refer to the plane geometry equations: https://en.wikipedia.org/wiki/Plane_(geometry) ). To cover this functionality the dev team also developed some Studio extensions that PTC customers can use instead. These extension widgets handle reflections etc., and it is recommended to use them: https://github.com/steveghee/OCTO_Studio_extensions https://github.com/steveghee/OCTO_effects_extensions
View full tip
To ensure your Chalk experience is the best, make sure to familiarize yourself with the below best practices.   Initialization Keep movement fluid and slow Forward and backward smooth motion is best to allow device to create mapping Small circles in front of the object are also good Note: Do not rotate your device - keep the device's orientation fixed, moving it parallel to the object of interest and keeping the latter in view during initialization movement Environment It is important for the environment to have a lot of saliency, interesting features, & textures e.g. Stickers, buttons, cables, images/designs, shapes with corners, etc Stationary objects are best for Chalking Reflective, plain colored, or blank surfaces are not good for using Chalk Marks Well-lit areas are best for Chalk performance If an environment is too dark the device's camera will not be able to detect objects External light may be needed if the environment is too dark Either user can toggle the flash on Network/Bandwidth Low bandwidth will result in poor video quality Ensure that you have good bandwidth Chalk Marks Use simple drawings to communicate instructions Circles, lines, & arrows work best Delete Chalk Marks that are no longer needed to reduce clutter Use the pause button to draw on a steady image  
View full tip
Unfortunately, in the Vuforia Studio Documentation there is no complete List with the possible events which could be handled JS. Therefore for the first time this article tries to provide additional Information about known events :   1.) modelLoaded - is not required any more because the UI allow directly to specify this event. ... $rootScope.$on('modelLoaded', function() { //do some code here } ) .... 2.) Step completed example: scope.$on('stepcompleted', function(evt, arg1, arg2, arg3) { var parsedArg3 = JSON.parse(arg3); console.log("stepcompleted stepNumber="+parsedArg3.stepNumber + " nextStep="+parsedArg3.nextStep); $scope.app.stepNumber=parseInt(parsedArg3.stepNumber); $scope.app.nextStep=parseInt(parsedArg3.nextStep); $scope.app.duration=parseFloat(parsedArg3.duration); }); 3.) Event - stepstarted: ... $scope.$on('stepstarted', function(evt, arg1, arg2, arg3) { var parsedArg3 = JSON.parse(arg3); console.warn(arg3); console.log("stepstarted stepNumber="+parsedArg3.stepNumber); $scope.app.stepNumber=parseInt(parsedArg3.stepNumber); $scope.app.nextStep=parseInt(parsedArg3.nextStep); $scope.app.duration=parseFloat(parsedArg3.duration); }); ... Please, pay attention that on some platforms will not provide complete information  in stepstarted. So, In this case the complete info is available in 'stepcompleted' – the best is to test it.   4.) after entering in  a view in studio (e.g. Home ...):   ... $scope.$on('$ionicView.afterEnter', function() {$scope.populateModelList(); }); ... 5.) click/tap event on the current panel: ... $rootScope.$on('click', function() { tapCount++;console.log("click event called");} ); ... or  with coordinates ... document.addEventListener('click', function(event) {console.log("click() 1 called"); $scope.lastClick = { x: event.pageX, y: event.pageY}; }); ... you can also see this topic.   6.) New step  -example: ... $scope.$on('newStep', function(evt,arg) { var getStepRegex = /\((\d*)\//; console.log(arg); console.log( getStepRegex.exec(arg)[1]); //check what it prints to the console - the step number }); ...    7.) Here is also  a more advance construct- it defines a userpick event e.g. for all models widgets: angular.forEach($element.find('twx-dt-model'), function(value, key) { // search all twx-td-model's -> means all model widgets angular.element(value).scope().$on('userpick',function(event,target,parent,edata) { //for each model widget will set a userpick listener console.log('edata');console.warn(edata); console.log("JSON.parse(edata)");console.warn(JSON.parse(edata)); var pathid = JSON.parse(edata).occurrence; $scope.currentSelection = target + "-" + pathid; // this is the current selection - the selected component occurence // you can use it for example as shown below // try{ //tml3dRenderer.GetObject($scope.currentSelection).GetWidget().ApplyOccludeOpacity(OCLUDE_VAL,OPACITY_VAL); //} catch (e1234) {$scope.view.wdg['3DLabel-4']['text']= "e 1234exception in GetObject.GetWidget..."; } // } ) //end of the userpick defintion } ) //end of for each funciton  8.) tracking event:   ... $scope.$on('trackingacquired', function (evt,arg) { // alert('didStartTracking'); // this is not really needed $scope.message = parseInt($scope.app.params["currentStep"]); $scope.$apply(); }); $scope.$on('trackinglost', function (evt,arg) { // alert('didFinishTracking'); $scope.message = "Scan the ThingCode with your camera."; $scope.$apply(); }); ....   9.) 
popover event:     // var my_tmp = '<ion-popover-view><ion-header-bar> <h1 class="title">My Popover Title</h1> </ion-header-bar> <ion-content> My message here! </ion-content></ion-popoverview>'; $scope.popover= $ionicPopover.fromTemplate(my_tmp, { scope: $scope }); $ionicPopover.fromTemplateUrl('my-popover.html', { scope: $scope }).then(function(popover) { $scope.popover= popover; }); $scope.openPopover= function($event) { $scope.popover.show($event); }; $scope.closePopover= function() { $scope.popover.hide(); } //////////////destroy popover $scope.$on('$destroy', function() { $scope.popover.remove(); }); /////// hide popover $scope.$on('popover.hidden', function() { // your hide action.. }); // on remove popover $scope.$on('popover.removed', function() { // your remove action }); }); 10) watch event -watches are created using the $scope.$watch() function. When you register a watch you pass two functions as parameters to the $watch() function: 1)A value function 2)A listener function    When the value returned by function 1.) changes - this lead to execution of the funciton 2.) Example:   ... $scope.$watch(function(scope) { return $scope.view.wdg['label-1']['text'] }, // watches if change for the the text of label-1 //when changes then play a step for model-1 function() { console.log($scope.view.wdg["model-1"]); $scope.view.wdg["model-1"].svc.play; } ); ...   11.) Camera tracking - make sense only on mobile device- no sense for preview mode!   //// define tracingEvent only on end device tml3dRenderer.setupTrackingEventsCommand (function(target,eyepos,eyedir,eyeup) { // $scope.view.wdg['3DLabel-1']['text']="eyepos=("+eyepos[0].toFixed(2)+","+eyepos[1].toFixed(2)+","+eyepos[2].toFixed(2)+")"; $scope.app.params['target']=target; $scope.app.params['eyepos']="eyepos=("+eyepos[0].toFixed(2)+","+eyepos[1].toFixed(2)+","+eyepos[2].toFixed(2)+")"; $scope.app.params['eyedir']="eyedir=("+eyedir[0].toFixed(2)+","+eyedir[1].toFixed(2)+","+eyedir[2].toFixed(2)+")"; $scope.app.params['eyeup'] ="eyeup =("+ eyeup[0].toFixed(2)+","+ eyeup[1].toFixed(2)+","+ eyeup[2].toFixed(2)+")"; ///////////////////// },undefined); //// define tracingEvent only on end device } //end device   12.) There is also a sequenceloaded event, which is useful if you have a model with multiple sequences defined, and you are switching sequences dynamically in the experience.   $scope.$on("sequenceloaded", function (evt, arg) { console.log("sequence loaded, starting play"); $scope.setWidgetProp("loading","visible",false); $scope.app.fn.triggerWidgetService("model-1","playAll"); }); In this point is  here a good feedback comming from advance user (expert) : If you grep for "$emit" through a project folder, you can turn up the following event names: valueacquired (bar code scanner) usercanceled (bar code scanner) tracking modelloadfailed sequenceloaded newStep playstarted sequenceacknowledge playstopped sequencereset onReset If you grep for "$on(", you can find some additional ones: trackingacquired trackinglost modelLoaded click app-fn-navigate app-fn-show-modal app-fn-hide-modal $ionicView.afterEnter $stateChangeStart loaded3DObj loadedSeqErr loadedSeq $destroy select3DObj move3DObj loadError3DObj readyForZoom3DObj serviceinvoke stepstarted stepcompleted twx-entity twx-service-input twx-service-name This is a  good point and it seems that this list contains the most of the possible events.   Events Handling Feedbacks from EXTERNAL DATA  services  Such event is    the twx-service complete event. 
This event is fired when a service registered in External Data is called and the service has completed. Here is an example (also mentioned in the related post). In this example the service LoadJSON was added to the External Data panel.

//////////////////////////////////////////////////////////////
$scope.GetJsonFromTwxRepository = function(path) {
  $scope.$applyAsync(function() {
    $rootScope.$broadcast('app.mdl.CAD-Files-Repository.svc.LoadJSON', {"path":path} );}
  ,500 );
  $scope.app.speak("after call of GetJsonFromTwxRepository")
};
//in the modelLoaded listener register the
// LoadJSON-complete event -> to load the data into the session
$rootScope.$on('modelLoaded', function() { ////
  $scope.$root.$on('LoadJSON-complete', function(event, args) {
    console.log("LoadJSON-complete event");
    $scope.COMP_LOCs=args.data
    console.log(JSON.stringify( $scope.COMP_LOCs))
  }); ///
});

So the code above shows how to call the service added to External Data from JavaScript. The service should return the requested JSON object. The call is asynchronous, so when ThingWorx comes back the 'LoadJSON-complete' listener is called, and here it prints the content of the JSON object to the console. The listener is registered inside the modelLoaded event (this event comes late, so we are on the safe side that everything is already initialized). In general, for any ThingWorx service added to External Data, a '<your_twx_service_name>-complete' event is fired, and args.data contains the data returned by that service.
View full tip
1.) The first point to clarify is: is it possible to extract model data of 3D models in Vuforia Studio? (Data could be extracted by Creo View Toolkit apps, but here only the Vuforia Studio environment is considered.) Suppose we have a model widget for an assembly model without explicit modelItem widget definitions. The question is: can we extract data for the components, and if yes, what data can we extract? In the Vuforia Studio environment, extracting data is possible only in Preview mode, because in Preview mode we have the method tml3dRenderer.GetObject() which lets us access a model object (a component), for example:

let comp_widget=tml3dRenderer.GetObject(selection).GetWidget()

where the selection is something like "<modelname>-<compPath>", e.g. "model-1-/0/0/3/2". Then from the widget we can extract data:

var loc=tml3dRenderer.GetObject(selection).GetWidget().GetLocation()
console.error("DEBUG getObj.GetWidget()")
console.warn(tml3dRenderer.GetObject(selection).GetWidget())

When we explore the different methods in the Chrome debugging console, we will find methods to get or to set properties. To extract data, we can use the get... methods. The methods of tml3dRenderer.GetObject() currently do not seem to work in Vuforia View on the end devices (the tml3dRenderer object is a handle of the Cordova Vuforia plug-in and has a different implementation on the different end devices; in Preview mode, as far as I know, the graphics are based on WebGL and Three.js). Therefore we will not be able, for example, to get the data of a component selection on the end device. This means we need a way to extract the data in Preview mode and make it available to Vuforia View on the end device. I did not find a method to extract the original component name, but I was able to create a list (JSON) with the position data (I did not add the color, but it is possible to access it, e.g. via tml3dRenderer.GetObject(selection).GetWidget().GetColor()). We can create a JSON object of the following form, for example:

{"model-1-/0/0/0":{"valid":false,"orientation":{"x":0,"y":0,"z":0}, "size":{"x":1,"y":1,"z":1},"scale":{"x":1,"y":1,"z":1}, "position":{"x":9.999999998199587e-24,"y":9.999999998199587e-24,"z":9.999999998199587e-24}}, "model-1-/0/0":{"valid":false,"orientation":{"x":0,"y":0,"z":0},"size":{"x":1,"y":1,"z":1}, "scale":{"x":1,"y":1,"z":1},"position":{"x":0,"y":0,"z":0}}, "model-1-/0/0/2":{"valid":false,"orientation":{"x":0,"y":90,"z":0},"size":{"x":1,"y":1,"z":1}, "scale":{"x":1,"y":1,"z":1},"position":{"x":0,"y":0.029500000178813934,"z":-5.51091050576101e-18}}, ...}

We can assign this JSON to a variable, e.g. $scope.COMP_LOCs, so that later we can read the current position data on the end device:

var selection_location=$scope.COMP_LOCs[l_currentSelection] //read the location data from the json variable
console.log("selection:"+l_currentSelection+"->X= "+ selection_location.position.x); //print it to console
selection_location.position.x= round(parseFloat(selection_location.position.x) + 0.005,4) //add 0.005 shift and round to 4 dec

2.) In point 1.) we checked how to extract the data of a component (a selection). We actually have a couple of methods to extract the data, but what we do not have is a valid selection of an assembly component. This is required to obtain a valid (temporary) modelItem widget via tml3dRenderer.GetObject(). For the selection generation we have the model widget name, e.g. "model-1", but we do not have the component ID paths.
To be able to construct a selection handle we need to construct the ID path of a component and then we need to check if it exist. This is some kind of graph search where we have an assembly with a components tree.  There the edges are the ids of the components. e.g. /0/0/1/1 , /0/0/1/2, /0/0/1/4, … etc. One possible algorithm is the deep first search:     To implement this I used the following javaScript code:   ///////////////////////////// var max_asm_depth=6; //this is the max depth in Creo Parametric var max_numb_comp_asm=25; /////////////////////////// ->deep first function check_comp_deep_first_recursively(target,path,arr) { //console.warn("called check_comp_deep_first_recursively(target="+target+",path="+path+")"); var selection = target+'-'+path var path_array = path.split('/') var depth = parseInt(path_array.length) var num = parseInt(path_array[depth -1]) var prev_num = parseInt(path_array[depth -2]) var prev_path = '' for (var i=1;i < depth -1;i++) {prev_path= prev_path +'/' + path_array[i]} if( check_for_valid_selection(selection) == 1) { arr[selection]=tml3dRenderer.GetObject(selection).GetWidget().GetLocation() if( (depth+1) < max_asm_depth) check_comp_deep_first_recursively(target, path + '/0', arr) else { if(num +1 < max_numb_comp_asm) check_comp_deep_first_recursively(target, prev_path + '/'+(num +1), arr)} } else { var right_num = num +1 if(right_num < max_numb_comp_asm) check_comp_deep_first_recursively(target, prev_path + '/'+right_num, arr) else if(!Number.isNaN(prev_num) ) {//console.log("--2") prev_path = '' for (var i=1;i < depth -2;i++) {prev_path = prev_path +'/' + path_array[i]} prev_path = prev_path +'/' + (prev_num +1) check_comp_deep_first_recursively(target, prev_path , arr) } } } ////////////////////////// ///call of the function: $scope.compJSON_loc_Data = new Object(); var target="model-1" check_comp_deep_first_recursively(target,'/0',$scope.compJSON_loc_Data) ...   The code above has the following weak spot - I need to give the maximum depth (max_asm_depth) and the maximum possible branches (max_numb_comp_asm)  The maximum depth currently in Creo assembly is 25 so that value which > 25 will not make a sense.  
The value of  max_numb_comp_asm  in a flat assembly (only one level of depth) corresponds to the number of the components - the maximum number of branches on particular level of depth   The another possible algorithm is the breadth first search:     To implement this  I used the following JavaScript code:   ///////////////////////////// var max_asm_depth=6; //this is the max depth in Creo Parametric var max_numb_comp_asm=25; /////////////////function check_comp_at_level(target,num,depth,arr) // ->breadth first function check_comp_at_level(selection,num,depth,arr) { var position =''; // console.log("call check_comp_at_level =>"+selection); try{ // console.log("====== check here ==========="); //console.warn(tml3dRenderer.GetObject(selection).GetWidget().GetLocation()); var loc=tml3dRenderer.GetObject(selection).GetWidget().GetLocation() if( (loc.scale.x == 0) || (loc.scale.y == 0) || (loc.scale.z == 0) ) return 0; // the scale could not be zero //position= tml3dRenderer.GetObject(selection).GetWidget().GetLocation().position //console.warn(position); //arr[selection]=position arr[selection]=loc return arr[selection]; } catch (e) {console.error("failsed with error="+e); return 0;} } /////////////////////////// function check_comp_at_level_recursively(selection,depth,arr) { //console.warn("called check_comp_at_level_recursively("+selection+",depth="+depth+")"); var num =0; if(depth >= max_asm_depth) { //console.log("maximum depth of max_asm_depth ="+max_asm_depth+" reached"); return 0;} for (num=0;num < max_numb_comp_asm; num++) { var currentSelection =selection+'/'+num if(depth <0) return 0; var pos = check_comp_at_level(currentSelection,num,depth,arr) if(pos ==0 ) { continue;} else {check_comp_at_level_recursively(currentSelection,(depth+1),arr) } } //end of for } ////////////////////////// //////////////////////////////// function check_for_valid_selection(selection) { //console.log(" check_for_valid_selection =>"+selection); try{ var loc=tml3dRenderer.GetObject(selection).GetWidget().GetLocation() if( (loc.scale.x == 0) || (loc.scale.y == 0) || (loc.scale.z == 0) ) return 0; return 1; } catch (e) {console.error("failsed with error="+e); return 0;} } /////////////////////////// ///call of the function: $scope.compJSON_loc_Data = new Object(); var target="model-1" check_comp_at_level_recursively(target,'/0',$scope.compJSON_loc_Data) ...     The code for the breadth first search uses also the parameters for maximum depth (max_asm_depth) and the maximum possible branches (max_numb_comp_asm)  - so means it have the mentioned  restriction. If we set a value which is large this will increase the time until the search is completed so therefore depending of the particular assembly we need to set the both parameter properly ( we need to be able to scan the whole assembly but to minimize the search time) For different assemblies the first deep or first breadth could lead to better results. For example, for flat assembly structures the better approach will be to use the first breadth algorithm  But actually the performance is not so important here, because the search will be called one time and  then the json list should be saved.  With the current functionality we can read a file (json file ) from the project  upload directory , but it seems that it is  not  possible to save the information to a e.g. json file there (upload folder). 
To read a json file form the upload folder we can use some code like this:     target='model-1' $http.get('app/resources/Uploaded/' + jsonFile).success(function(data, status, headers, config) { $scope.compJSON_mod=data; // in this case the data is the received json object angular.forEach(data , function(color, path_id){ $scope.compJSON_Data[path_id] =position; console.log("target="+target+" --> $scope.compJSON_Data["+path_id+"] = "+$scope.compJSON_Data[path_id]); });//end of the error function ////////// finish for each path_id }) .error(function(data, status, headers, config) {console.log("problem in the http will create a new ");   When we want to save data  (the generated json list) we need to use another workaround - we can use a thingworx repository. Following functions /events could be used to save and receive an json object to/from a twx repository:   // the methods SaveJSON and LoadJSON // for the repository object should have //run permision for es-public-access user ////////////////////////////////////////////////////////////// $scope.SaveJsonToTwxRepository = function(path, content) { $scope.$applyAsync(function() { $rootScope.$broadcast('app.mdl.CAD-Files-Repository.svc.SaveJSON', {"content": content, "path":path} );} ,500 ); }; ////////////////////////////////////////////////////////////// $scope.GetJsonFromTwxRepository = function(path) { $scope.$applyAsync(function() { $rootScope.$broadcast('app.mdl.CAD-Files-Repository.svc.LoadJSON', {"path":path} );} ,500 ); $scope.app.speak("after call of GetJsonFromTwxRepository") //in the modelloaded listener register // LoadJSON-complete event -> to laod the data into session rootScope.$on('modelLoaded', function() { //// $scope.$root.$on('LoadJSON-complete', function(event, args) { console.log("LoadJSON-complete event"); $scope.COMP_LOCs=args.data console.log(JSON.stringify( $scope.COMP_LOCs)) }); /// });   In  the code above I use the 'modelloaded' listener to register LoadJSON-complete event . Because the service is called asyncronously- we need this event to load the data into session when it is received from thingworx. Here in this example the repository object is named "CAD-Files-Repository" The Thingworx services should have run permission and it is required to be added in the external data panel :     So when we start the project in PREVIEW mode we can call the search for the assembly structure and save it then  to thingworx. In Vuforia View mode   then we can receive the previously saved json object from thingworx. To check the current mode (if Preview or End Device)  we can use    if(twx.app.isPreview() == true) ...   
it will  check if the current mode is preview mode or Vuforia View on the end device - here an example of the workflow:   if(twx.app.isPreview() == true) {// preview mode //calling breadth first - test check_comp_at_level_recursively(target+'-',0,$scope.compJSON_POS_Data) //console.warn($scope.compJSON_POS_Data) //calling deep first a second test and generating a data - locations check_comp_deep_first_recursively(target,'/0',$scope.compJSON_loc_Data) console.log("========================================") console.log("$scope.compJSON_POS_Data ->breadth first") console.log("========================================") console.log(JSON.stringify($scope.compJSON_POS_Data)) console.log("========================================") console.log("") console.log("") console.log("========================================") console.log("$scope.compJSON_loc_Data ->deep first") console.log("========================================") console.log(JSON.stringify($scope.compJSON_loc_Data)) $scope.SaveJsonToTwxRepository('/CADFiles/json_lists/compJSON_loc_Data.json',$scope.compJSON_loc_Data) $scope.GetJsonFromTwxRepository('/CADFiles/json_lists/compJSON_loc_Data.json') console.log("========================================") console.log("") } else { //here is the part on mobile device $scope.GetJsonFromTwxRepository('/CADFiles/json_lists/compJSON_loc_Data.json') }   I tested all points of  the described techniques above in a  test project which I want to provide here as zip file for the HoloLens (hideComponetsHoloLens .zip):     So to be able to test it you need to create in Thingworx a repository thing - means a thing which uses  the thing template "FileRepositroy" with the name "CAD-Files-Repository" and create a folder there "/CADFiles/json_lists/" (if you use another name and another folder (e.g. "/" no folder - the root repository folder) you have to adapt the javaScript code:   ... /CADFiles/json_lists/compJSON_loc_Data.json ... app.mdl.CAD-Files-Repository.svc.SaveJSON' ... app.mdl.CAD-Files-Repository.svc.LoadJSON'    
View full tip
This example briefly describes how you can use the Step names that you used in Creo Illustrate sequence definitions to drive a corresponding step instruction/description in your experience. This is an unsupported, preliminary solution - R&D is working on a better, final solution. But as long as that is not available, you can use this one for PoC and demo purposes. To set up the scene: here is what I meant with the Step names that you used in Creo Illustrate. Now in ThingWorx Studio you want to have the following result: the text is rendered with a simple Label widget. You'll have to remember the ID of this widget for the following JavaScript tweak. Add the following text to the JavaScript section of your View:

var labelId = "label-1"; // ID of the Label widget that displays the Step progress and description text
// this $on event handler switches the label based on the sequence definition
// the arg variable is of the following form: (<step #>/<total step #>) <step name>
$scope.$on('newStep', function(evt, arg) {
  $scope.setWidgetProp( labelId, "text", arg); // set the label text from the arg
});

Now you only need to provide the correct initial value in the Label widget text property and add control widgets (Buttons, Playback) to drive your animation and you're done. Easy!
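If you only want the step description without the "(x/y)" progress prefix, a minimal, hypothetical variation of the handler above could parse the arg first (the regular expression just follows the format noted in the comment above; it is not part of the original tip):

$scope.$on('newStep', function (evt, arg) {
  // arg looks like "(2/7) Remove the cover", per the format described above
  var match = /^\((\d+)\/(\d+)\)\s*(.*)$/.exec(arg);
  if (match) {
    // show only the step description; the counter is still available separately
    $scope.setWidgetProp(labelId, "text", match[3]);
    console.log("step " + match[1] + " of " + match[2]);
  } else {
    $scope.setWidgetProp(labelId, "text", arg); // fallback: show the raw value
  }
});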
View full tip
You can get the view count for all public experiences by using the REST API call below.   GET <your ES URL>/compliance/views   This returns a JSON object with a views field and a billables field, where views = the number of downloads of publicly accessible projects, and billables = how many downloads are counted towards billable tokens. You can also specify startDate and endDate in your request. These parameters can be defined as a UTC date string, a JSON date string, or a time in milliseconds since a specified date.
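As an illustrative sketch (the Experience Service URL, the date format chosen here, and the authentication handling are assumptions - check them against your own server), such a request could be issued from JavaScript like this:

// Hypothetical example: query the view counts for a given date range.
var esUrl = 'https://my-es.example.com'; // replace with your Experience Service URL
var query = '?startDate=2019-01-01T00:00:00Z&endDate=2019-02-01T00:00:00Z';

fetch(esUrl + '/compliance/views' + query)
  .then(function (response) { return response.json(); })
  .then(function (json) {
    // expected fields per the tip above
    console.log('views: ' + json.views + ', billables: ' + json.billables);
  })
  .catch(function (error) { console.error(error); });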
View full tip
Issues connecting to the ThingWorx Experience Service from ThingWorx Studio are often related to more complex proxy configurations. One such configuration uses a .pac script that dynamically resolves the proxy based on the requested URL. ThingWorx Studio has the ability to configure a proxy server, but you have to explicitly specify one server URL; you can't replicate the settings of an internet connection that uses a .pac script. No worries - there is a workaround - Proxy-Vole to the rescue!   You can find Proxy-Vole (https://proxy-vole.kenai.com/ ) on the Internet. It is a small Java-based application that can be used to auto-resolve your proxy configuration. It has a command line and a UI frontend. The documentation is somewhat unstructured - for this test you only need a few of the steps: Download the Proxy-Vole jar-with-dependencies. Start the Proxy-Vole application in a command shell using the following command: java -cp ./proxy-vole-1.0.1-jar-with-dependencies.jar com.github.markusbernhardt.proxy.ui.ProxyTester Enter the required values in the dialog box (the original post includes a screenshot). Then specify the resulting proxy URL in ThingWorx Studio. That's it! If you still have issues, please post the log on the Developer Forum site.
View full tip
With various Augmented Reality applications in PTC's product portfolio, the technical aspects and use cases could leave you with some questions. Did you know that we not only have a full-blown Augmented Reality SDK, but also offer the possibility of an easy-to-use integration with live sensor data coming in via ThingWorx?   This blog post hopefully clarifies some of the questions around what can be done with Vuforia SDK and Vuforia Studio.   Welcome to the real world   In the real world, or the "real reality" (sounds weird, but it's basically what you can see with your own eyes - no augmentation involved), there are various objects. These might look the same - or not. Just take the following example... that's what we perceive when looking at things around us:     These objects are recognized via shapes, contrasts (black & white) and whatever defines the actual form. Vuforia SDK is able to recognize those objects via its built-in object recognition capabilities. However, there might be limits - depending on the use cases...   While buildings could be distinguished by their form, playing cards could be distinguished via their suits and nominations. The machines, however, all look the same - they probably all are the same.   Combining the real world with a virtual world   "Augmented Reality" allows you to enhance a physical object with virtual properties, e.g. overlay its CAD model or overlay some animations for a better gaming experience. Check out this video for the Genesis Augmented Reality Trading Card Game example.   Object Recognition allows you to put actual names to what the (digital) eye can see:     Once the object is recognized and identified, all kinds of virtual attributes can be added. Vuforia SDK allows you to do this with e.g. Unity.   As all of the machines are basically the same - they look the same, come from the same manufacturer and behave the same - identification can only be done via a manual effort, e.g. selecting the actual machine manually within an app (via a menu etc.). This manual selection process will then map a generic form and shape of the machine to the actual physical machine you can see and touch just in front of you.   In an app this might be necessary if you can recognize the generic form of a playing card but forgot to implement the suits and nominations. In that case, either extend the recognition part, or show a drop-down list when the card is identified to choose the actual card in front of you.   How do ThingMarks fit in?   Using the functionality of Vuforia SDK, Vuforia Studio combines the power of Vuforia (AR) with the power of ThingWorx (live sensor data / object information). In an industrial environment I could select the correct machine I'm looking at. However, what's the identifier? It is probably written somewhere on the back of the machine with lots of other information, so I don't really know what to look for. Therefore I could be looking at any machine, but without the identifier I cannot retrieve information for my machine.   Vuforia Studio uses ThingMarks. They work similarly to a QR code and allow for direct identification of individual machines. So instead of choosing manually in the app, the ThingMark automatically chooses the correct object and relates that ID to a Thing Entity in ThingWorx.     In the above image, the ThingMark allows us to a) identify that we're looking at a machine and b) identify that we're looking at the specific machine A03. It's basic point and shoot.
Scan the ThingMark with your mobile device and you're directly taken to the particular experience for this particular machine.   In this case, it's not the machine that defines our object's properties, shapes, contrasts, sizes etc. - it's the ThingMark that is the object being recognized. That's quite a difference.   So now, in an additional step, we're using the power of Vuforia to identify individual machines by a ThingMark. Recognition is driven by the ThingMark's shape, which includes an encoded object ID (the QR-code-looking pattern).   How does ThingWorx fit in?   After recognizing the machine, ThingWorx Studio provides the link between this specific object (or its instance) and the ThingWorx Thing Entity we've defined in Vuforia Studio.   This allows us to retrieve individual properties, services, events, alerts etc. directly via ThingWorx. Those values are unique per object, not per shape!   So this allows us to look directly at the temperature, level and failure indicator of the actual machine in front of us:     Bridging the gap   Vuforia Studio is used to bridge the gap between Vuforia with its Augmented Reality capabilities and ThingWorx with its Internet of Things (IoT) capabilities. Vuforia Studio uses parts of both applications, adds its own functionality and defines its own product category: Connected Augmented Reality.     There are quite a few components involved in this:     This can be split into two processes: developing and experiencing.   Development   Create a new experience in Vuforia Studio, map the experience to the ThingMark ID, and map the experience to a Thing Entity in ThingWorx. Publish the experience to the Experience Server. Done.   Experience   Scan the ThingMark with the Vuforia View app. Vuforia View will utilize Vuforia to recognize the ThingMark. Vuforia View will load the data and the model(s) for this ThingMark from the Experience Server. Vuforia View will automatically receive and update the experience you're viewing with live data from the ThingWorx platform. Enjoy.   Resources   There are quite a few videos, tutorials, best practices etc. available on how to develop and experience the world of Vuforia Studio. Check out ThingWorx Studio Resources: Getting Started Guides, Tutorials, Troubleshooting in the Article Hub and quite a lot of good stuff!   More information   To get more information visit the product pages at https://www.vuforia.com https://trial.studio.vuforia.com/   If you're looking for help, these might be of interest:   https://developer.vuforia.com/support for Vuforia SDK https://community.ptc.com/t5/Studio/bd-p/studio for Vuforia Studio https://community.ptc.com/t5/ThingWorx-Developers/bd-p/twxdevs for ThingWorx https://support.ptc.com/    What's next?   Get involved, create your own experience. It's fun, it's quite easy and well... it looks good, too!
View full tip
With release 1.9.1, pilot and free trial participants can auto-configure Vuforia Studio to make it easier to get up and running quickly. The auto-configure process does the following: Configures the sample projects included with your Vuforia Studio installation so that when you publish those projects they are published to your Experience Service and can be viewed in Vuforia View using one of your ThingMarks. Retrieves the Experience Service (ES) URL, which can be found under Project -> Configuration -> Info (the ES URL is no longer sent in the welcome email). Downloads your ThingMarks and makes them available on the My ThingMarks page inside Vuforia Studio so that you can view your ThingMarks and print them out. In order to complete the auto-configuration process, users are first required to authenticate using their PTC Account credentials. For participants in the Vuforia Studio Free Trial, this does not introduce any confusion, since they use their PTC Account credentials for everything: accessing the Studio Portal, publishing experiences from Vuforia Studio, downloading experiences to Vuforia View and working in ThingWorx Composer.   However, for participants in the Vuforia Studio Pilot Program, this may introduce some confusion. Unlike free trial participants, pilot participants have two sets of credentials: PTC Account credentials, used to access the Studio Portal and to auto-configure Vuforia Studio, and Experience Service credentials, provided in their Pilot Program Welcome Email, which are used to publish experiences from Vuforia Studio, view experiences in Vuforia View and access ThingWorx Composer. The auto-configuration process requires users to authenticate using their PTC Account credentials. Since the auto-configuration process occurs inside Vuforia Studio and pilot participants do not normally use their PTC Account credentials inside Vuforia Studio, this may cause some confusion.   Note   Any user that received access to an Experience Service instance before February 17, 2017 is a participant in the pilot program. Any user that received access to an Experience Service on or after February 17, 2017 is a participant in the free trial.
View full tip
If the experience project exists in Vuforia Studio: Unpublish the project by hovering over the project and clicking the Unpublish Project Experiences icon. This action removes the Experiences from the Experience Service. If the experience project does not exist in Vuforia Studio: Using a curl command: curl -u <username>:<password> -H "Content-Type: application/json" -X "DELETE" https://<your-domain-name>/ExperienceService/content/projects/<projectname> where username: Experience Service username; password: Experience Service password; your-domain-name: Experience Service domain; projectname: Experience project name to be deleted. Using a REST call from Postman: Select the query method 'DELETE' and enter the URL as https://<your-domain-name>/ExperienceService/content/projects/<project-name> (your-domain-name: Experience Service domain; project-name: Experience project name to be deleted). In the Authorization menu, choose the authorization type 'Basic Auth', add the user credentials and send the request.
View full tip
Just helped someone who was seeing a difference between the Preview mode and what Vuforia View was showing. The experience was being developed for public access, so what was happening is that Preview mode worked because they were using their Studio credentials to run the Thingworx service. When they used Vuforia View, they were using es-public-access to run the Thingworx service. And es-public-access did not have the runtime permissions to execute the service.   The error indication was found in the Application Log complaining about the Entity and Service not being able to be run.   Don't forget to set your permissions for es-public-access so it can run your services!  
View full tip
Your company might have a CSS file that represents the corporate identity - or you may have other sources of reusable CSS styles that you want to include with minimal effort. Here is what you have to do to use corporate CSS files to drive the look and feel of your experiences: Add the corporate CSS file (e.g. company.css) to your resources. In Application styles, add the following at the beginning (before any other CSS style entry): @import url(#{$resources}/Uploaded/company.css); With suitable content in company.css and a label definition that references one of its classes, the corporate styling is applied in the editor as well as in the preview. Gotcha!
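As a purely hypothetical sketch (the class name and colors below are made up, not taken from the original post), company.css could contain something like this:

/* company.css - assumed example of a corporate style */
.corporate-label {
  color: #006699;                 /* corporate blue */
  font-family: Arial, sans-serif;
  font-weight: bold;
}

Setting a Label widget's Class property to corporate-label would then pick up this style in the editor, the preview, and the published experience.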
View full tip
Mechanism concept in Vuforia Studio - how to make rotation easier.   When we try to rotate a model or a 3D modelItem about a particular axis in space, it often does not rotate about the axis we want. In this case we can try to solve the problem with some mathematical calculations. For example, in the door assembly shown in the picture below, we want to rotate the door subassembly about the door hinges:     but when we try to rotate the door model by 60 degrees, it rotates about the wrong axis.      The question is: which axis is this, and how can we change it? The answer is: when we have a PVZ model we cannot really change it! We can, however, use some mathematical relations to get the correct behavior. In this particular sample case the correct JavaScript relation looks something like this:

$scope.simple_door_slider_change = function (angle, door_length) {
  var angle1=angle;
  var l_door=door_length;
  var angle1_rad=angle1*Math.PI/180.0;
  $scope.view.wdg['modelItem-door-asm']['rx'] = angle1 ;
  $scope.view.wdg['modelItem-door-asm']['y']  = 0.0 - l_door*Math.sin(angle1_rad);
  $scope.view.wdg['modelItem-door-asm']['z']  = 0.0 - l_door*(1.0-Math.cos(angle1_rad));
  $scope.app.params['door_angle']=  angle1;
};

So calling $scope.simple_door_slider_change(70,0.950); will rotate this particular door assembly into the correct place.       But what can we do to solve the problem for more complex assemblies, for example when we want to rotate the door handle? Of course such a calculation is possible, but it will be more complex (containing translations and rotations), and we need to invest significantly more time in the creation of the mathematical concept. The main problem is that we mostly do not know the correct coordinate axis for each component.   Unfortunately, the only option we have here is to take this into consideration already in the Creo Parametric design (or in another CAD tool). For example, the following part has a default coordinate system; in the example picture it is named PRT_CSYS_DEF.     When we later rotate about x in Vuforia Studio, the model rotates about the X axis of this default csys "PRT_CSYS_DEF". So when we have a component that should later be rotated in Vuforia, we need to pay attention already in the design and try to assemble the component so that the default csys is in the correct location.    The default coordinate system in a Creo Parametric model is created with the model and cannot be changed later (there is a workaround where we use an auxiliary assembly into which we insert the model; in this case we can move the model inside the auxiliary assembly, and the auxiliary assembly will rotate about its default coordinate axis).   The next step is to consider how to design a more complex mechanism assembly. Let's consider the following assembly:       When we create a project and then try to rotate different components (arms) via sliders, we get, for example, the following situation:       The one (blue) component is rotated as desired, but when we rotate the blue component the green component does not follow it. Let's create another version of the mechanism where we have the correct behavior:     What is different there?   The answer is that we used a different structure: here we nested the moved components in further subassemblies.
It is important that in this case, for the modelItem widget definitions in Vuforia Studio, we do not use only parts but also assemblies. Here the subassembly arm2 was used for the definition of a modelItem, and it contains the part arm1, which is an additional modelItem.       In this case we can change the rotation value of the axis and the components rotate as desired.
View full tip
In Vuforia studio the best way to interact with 3d model components is to define explicit 3d modelitems (widget modelItem). So this will be an easy way to access the componets and to change their properties e.g. setting of the color  e.g.: $scope.setWidgetProp("modelItem-1", "color",  "rgba(128,0,0,1)");   This will  change the modelItem-1 property color to brown – and will display the component which is specified by this modelItem with a  brown color. Another way to do this in javaScript is something like :   $scope.view.wdg['modelItem-1']['color'] = "rgba(128,0,0, 1);";//brown $scope.view.wdg['modelItem-1']['opacity'] = 0.5;//set transparency to 0.5 //or for the same $scope.setWidgetProp("modelItem-1", "color", "rgba(128,0,0,1);"); //brown $scope.setWidgetProp("modelItem-1", "opacity", 0.5); //set transparency to 0.5   But in some cases during the project development it  could  be helpful when we are  able to manipulate the components or request information about them without defining of explicit modelItem widgets. For example if we want to select a component to see some information about the component and change the color of it:   var PICK_COLOR = "rgba(255,0,0,1)"; ... $timeout( //timeout block 1 function() { //timeout function 1 angular.forEach( //==== for each 3d model block // this will call the function below for each 3d model $element.find('twx-dt-model'), function(value, key) { //for each 3d model block function //===================================================================================== angular.element(value).scope().$on('userpick',function(event,target,parent,edata) { // start the $on() listener 'userpick' + function definition //================================================================================= var pathid = JSON.parse(edata).occurrence; $scope.currentSelection = target + "-" + pathid; // create a component selection e.g. "model-1-/0/0/3/2" console.log("twx.app.isPreview() ="+twx.app.isPreview() ); //print an info if is called in preview mode and could be checked if required try{tml3dRenderer.setColor($scope.currentSelection, PICK_COLOR);} catch (ex) {console.warn("Exception 1 in tml3dRenderer.setColor()=>"+ex);} //will set the color of the selected component } //end of mobile device modelItemsList.push($scope.currentSelection); } //end is in array //================================================================================= }); // finish the $on() listener 'userpick' + function definition } //finish for each 3d model block function ); // finish for each 3d model block //================================================================================= } ,50); // finishtimeout block 1 and function   If  we use  PICK_COLOR  = "rgba(255,0,0,0)"; It means that this color (red) is set for a selected component. Here the one additional detail is the last argument - which have a value of 0. Means alpha channel =0 - or full transparence. On the most mobile devices it will hide the selected component, but this is not supported techniques and we have to use always color with alpha channel >0. / transparent but still visible/   Calling of the tml3dRenderer.setColor(…, undefined); will set the component color back to default - example:   tml3dRenderer.setColor(‘model-1-/0/0/3/2’, undefined);    Another important point is that when we know the model name and know the component ids, in this case we can also set the color or hide components without explicit definition of model items. 
For example for a particular model we have prepared  a json file (*):     { "/0/0/2" :"rgba(255,0,0,1);", "/0/0/0" :"rgba(128,0,0,1);", "/0/0/5" :"rgba(128,0,128,1);", "/0/0/3/0":"rgba(0,255,0,1);", "/0/0/6" :"rgba(255,200,0,1);", "/0/0/3/1":"rgba(0,0,0,0.2);", "/0/0/7/0":"rgba(0,0,0,0.2);", "/0/0/7/1":"rgba(0,0,0,0.2);" }   The model to which this json file was created is placed in Vuforia Studio as model widget with name=model-1  We can then read this json file (from prject->src\phone\resource\Uploaded folder) with some javaScript construct like (below) and set the color property of the components (also change the transparence - for the components with alpha channel =0.2)  Here an example (*):   //======================================================== // reading a json file with component setting for the components //======================================================== $scope.setCompProps=function() { var FILES_MODEL_COMP = { 'model-1':'comp_info.json' }; $scope.compJSON_Data = new Array(); angular.forEach(FILES_MODEL_COMP, function(jsonFile, target) { console.log("angular.forEach jsonFile = "+jsonFile + ", target="+target); $http.get('app/resources/Uploaded/' + jsonFile).success(function(data, status, headers, config) { $scope.compJSON_Data[target]=data; // in this case is $scope.compJSON_Data['model-1']= of the json structure file here the content'comp_info.json'; angular.forEach(data , function(color, path_id){ console.log("target="+target+" --> color = "+color + ",path_id="+path_id); tml3dRenderer.setColor(target+'-'+path_id, color); });//end for each function }) .error(function(data, status, headers, config) {console.log("calling in foreach 1 failed"); }); }); };     So when we start for this particular model the test code it will change the display of the model according to the setting of the comp_info.json  file:     The code above is more than intended for setting colors and transparency  . According a recommendation from development for hiding of components is better to use  the hidden property:   tml3dRenderer.setProperties($scope.currentSelection, { hidden:true } );   The sample  code below  ( more simplified) is  for the case that we want to blank a component by click on it:   angular.forEach($element.find('twx-dt-model'), function(value, key) { // search all twx-td-model's -> means all model widgets angular.element(value).scope().$on('userpick',function(event,target,parent,edata) { //for each model widget will set a userpick listener try{ console.log('edata');console.warn(edata); console.log("JSON.parse(edata)");console.warn(JSON.parse(edata)); var pathid = JSON.parse(edata).occurrence; $scope.currentSelection = target + "-" + pathid; console.log("=>>"+$scope.currentSelection); } catch (ea) {console.error("not twx-model is clicked but still fired")} try{ // here below the change recommended from R&D tml3dRenderer.setProperties($scope.currentSelection, { hidden:true } ); } catch (e1234) {console.error( "e="+e1234); }   Here tested the code on the HoloLens 1.0 device:     When we have a color definiton  with  opacity -> the alpha channel set here e.g. 
to 0.1, and this color should be changed later:

var PICK_COLOR_OPACITY1 = "rgba(,,,0.1)";

To change the rgba expression to another transparency value, a construct like this can be used (the r,g,b values are left empty here as placeholders):

var PICK_COLOR_OPACITY1 = "rgba(,,,0.1)";
var OPACITY_VAL = 0.3;
var PICK_COLOR_OPACITY2 = PICK_COLOR_OPACITY1.replace("0.1)", OPACITY_VAL + ")");

The JavaScript code above sets a transparency value of 0.3 (replacing 0.1 with 0.3). But for the case where the json file defines a color with alpha channel = 0:

...
"/0/0/3/1":"rgba(0,0,0,0.0);",
...

it is recommended to modify the code so that it checks the value of the alpha channel and, if it is == 0, sets the hidden property instead. Example (*):

...
//========================================================
// reading a json file with component settings for the components
//========================================================
$scope.setCompProps = function() {
  var FILES_MODEL_COMP = { 'model-1':'comp_info.json' };
  $scope.compJSON_Data = new Array();
  angular.forEach(FILES_MODEL_COMP, function(jsonFile, target) {
    console.log("angular.forEach jsonFile = " + jsonFile + ", target=" + target);
    $http.get('app/resources/Uploaded/' + jsonFile).success(function(data, status, headers, config) {
      $scope.compJSON_Data[target] = data; // in this case $scope.compJSON_Data['model-1'] holds the json structure of the file 'comp_info.json'
      // following the R&D recommendation to use the hidden property, we check whether the alpha channel == 0
      angular.forEach(data, function(color, path_id) {
        console.log("target=" + target + " --> color = " + color + ", path_id=" + path_id);
        var start_alpha = color.lastIndexOf(",");
        var end_alpha = color.lastIndexOf(")");
        var alpha_str = color.substring(start_alpha + 1, end_alpha);
        var num = Number(alpha_str);
        if ( (isNaN(num)) || (num <= 0.0) ) { // set the color with alpha channel 1.0 and hide the component
          var new_color = color.substring(0, start_alpha + 1) + "1.0" + color.substring(end_alpha, color.length);
          tml3dRenderer.setColor(target + '-' + path_id, new_color);
          tml3dRenderer.setProperties(target + '-' + path_id, { hidden:true } );
        } else { // color unchanged
          tml3dRenderer.setColor(target + '-' + path_id, color);
        }
      }); //end for each function
    })
    .error(function(data, status, headers, config) { console.log("calling in foreach 1 failed"); });
  });
};
/////////////
...

The example above sets the component color with the defined r,g,b values but alpha channel 1.0, and interprets an original alpha value of 0 in the file as an instruction to set the hidden property to true. Does this make sense? Yes: if we later set the hidden property back to false, the component reappears with the color defined in the json file.
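To make that last point concrete, here is a minimal sketch of a function that un-hides all components again. It assumes the json data was already stored in $scope.compJSON_Data by the function above; the function name showAllComponents is only an illustrative choice.

$scope.showAllComponents = function() {
  angular.forEach($scope.compJSON_Data, function(data, target) { // for every model listed in compJSON_Data
    angular.forEach(data, function(color, path_id) {
      tml3dRenderer.setProperties(target + '-' + path_id, { hidden:false } ); // make the component visible again
      // the color applied earlier (with alpha forced to 1.0) stays set,
      // so the component reappears with the color defined in the json file
    });
  });
};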
View full tip
In this particular case we have some sensors/devices which can be accessed via WLAN/web, and we need to request the values of these sensors via REST API calls. For example, a simple REST API request from JavaScript looks like this (using a test web page which provides a demo response):

//this code will work
fetch('https://jsonplaceholder.typicode.com/todos/6')
  .then(response => response.json())
  .then(json => { console.log(json); })
  .catch(error => { console.error(error); });

... but the same code will not work for an http url:

fetch('http://ip.jsontest.com/')
  .then(response => response.json())
  .then(json => { console.log(json); })
  .catch(error => { console.error(error); });

When I tested this, my observation was that both the https and the http request work in Studio in preview mode, but only the https request works on Android and iOS devices; the http fetch request does not.

This means that a design which tries to read the data directly from JavaScript in Vuforia View will not work, or at least will not work reliably. Therefore a better way, and also the supported way, is to get (bind) the sensor data via the External DATA panel.

To achieve this goal we first need to create a Thing with properties which can be displayed in the experience project. The next step is to read the sensors and update the properties. If the sensor URLs are reachable from the ThingWorx instance, we can use a ThingWorx service called by a timer: the timer calls the service at a particular interval, and the service then reads the data from the sensors.

In the picture above we need to define a service which calls a REST API to read the sensors. In this example, to simulate the call, we read a timestamp from a postman-echo service. As the name says, it returns exactly the values that were sent to it (but in a different format, as a JSON object).
So, for example, when we call the following link in a web browser:

http://postman-echo.com/time/object?timestamp=2018-6-9:8:8:4

it returns the following json object:

{"years":2018,"months":5,"date":1,"hours":9,"minutes":8,"seconds":8,"milliseconds":4}

In this example we create a service testGetValue() which calls the echo service and returns the json response as an InfoTable output:

//URL_STRING="http://postman-echo.com/time/object?timestamp=2018-6-9:8:8:4"
var year = 2010 + Math.floor((Math.random() * 10) + 1); //2011...2020
var month = Math.floor((Math.random() * 8) + 1); //1-8
var day = Math.floor((Math.random() * 18) + 10); //10-27
var hour = Math.floor((Math.random() * 24)); //0-23
var minute = Math.floor((Math.random() * 60)); //0-59
var second = Math.floor((Math.random() * 60)); //0-59
var msecond = Math.floor((Math.random() * 1000)); //0-999
//these random values are used only so that the echo web site receives a valid timestamp
//calling the rest API
var URL_STRING = "http://postman-echo.com/time/object?timestamp=" + year + "-0" + month + "-" + day + ":" + hour + ":" + minute + ":" + second + ":" + msecond;
var params = {
  proxyScheme: undefined /* STRING */,
  headers: undefined /* JSON */,
  ignoreSSLErrors: undefined /* BOOLEAN */,
  useNTLM: undefined /* BOOLEAN */,
  workstation: undefined /* STRING */,
  useProxy: undefined /* BOOLEAN */,
  withCookies: undefined /* BOOLEAN */,
  proxyHost: undefined /* STRING */,
  url: undefined /* STRING */,
  timeout: undefined /* NUMBER */,
  proxyPort: undefined /* INTEGER */,
  password: undefined /* STRING */,
  domain: "postman-echo.com" /* STRING */,
  username: undefined /* STRING */
};
params.url = URL_STRING;
// result: JSON
var json = Resources["ContentLoaderFunctions"].GetJSON(params);
//var json_string= JSON.stringify(json);
//var new_json = JSON.parse(json_string);
var params1 = {
  infoTableName: "InfoTable",
  dataShapeName: "InoTableDataShape_Time1"
};
var infotabletest = Resources["InfoTableFunctions"].CreateInfoTableFromDataShape(params1);
infotabletest.AddRow({years:json.years, months:json.months, date:json.date, hours:json.hours, minutes:json.minutes, seconds:json.seconds, milliseconds:json.milliseconds});
var result = infotabletest;
//now set the values to the Thing properties
me.years = parseInt((infotabletest).getFirstRow().getValue('years')).toString();
me.months = parseInt((infotabletest).getFirstRow().getValue('months')).toString();
me.date = parseInt((infotabletest).getFirstRow().getValue('date')).toString();
me.hours = parseInt((infotabletest).getFirstRow().getValue('hours')).toString();
me.minutes = parseInt((infotabletest).getFirstRow().getValue('minutes')).toString();
me.seconds = parseInt((infotabletest).getFirstRow().getValue('seconds')).toString();
me.milliseconds = parseInt((infotabletest).getFirstRow().getValue('milliseconds')).toString();
//=================================================================

To be able to convert the json object to an InfoTable we need to define a DataShape with fields corresponding to the json elements (here e.g. InoTableDataShape_Time1).

So every time this service is called, it calls the postman-echo web site with some random data and sets the property values based on the received response. In this case the request simply returns the data that was sent (which is not really meaningful in itself), but it demonstrates the principle: the same pattern can be used to request measurement values from edge devices via REST API calls, provided the edge devices support them (for example Arduinos, Raspberry Pis, ESP8266s, etc. set up as small web services with a REST API for reading measurement values).
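As a side note, instead of assigning each property by hand as in the service above, the json elements could also be copied in a loop. This is only a sketch; it assumes the Thing has STRING properties named exactly like the json elements returned by the echo service and that dynamic property access on me works in the server-side script:

// minimal sketch: copy every echoed json element into the Thing property of the same name
// (assumption: properties years, months, date, hours, minutes, seconds, milliseconds exist as STRING)
var fields = ["years", "months", "date", "hours", "minutes", "seconds", "milliseconds"];
for (var i = 0; i < fields.length; i++) {
    me[fields[i]] = parseInt(json[fields[i]]).toString();
}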
Now we need to create a timer Thing which calls the service at a particular interval (here 1 second / 1000 ms); the service used is testGetValue() as defined above.

This updates the property values, and we can see the updated properties in Vuforia Studio.

But often the sensor URLs are not visible to the ThingWorx instance. In this case we can read the sensor values inside the local network (with some kind of intermediate service) and then send them to the Thing properties using one of the methods described in the PTC guide “Choose a Connectivity Method -> Guidelines for selecting the optimal method for connecting to ThingWorx”: https://developer.thingworx.com/en/resources/guides/choosing-connectivity-method. An example of one alternative way can be found in "Node.js Rest API example how to display data from the local network in Vuforia Studio project?"
View full tip
In this article we have the same starting point as described in “How to read sensors via Rest API call in and display it Vuforia Studio experience project?”, but with one significant difference: the sensor URLs are not visible to the ThingWorx service. The problem is that the sensor values have to be requested via REST API calls inside a local intranet. The end devices are connected to a local router and have IP addresses which are only valid in the local WLAN. On the other hand, the router also has internet access, so the end devices can reach the Experience Server and can, for example, download the experience. A sensor URL for such a REST API call looks something like:

var url="http://172.16.40.43:5900/api/v0/dev_id=6&size_id=123";

So the IP address of the device whose values should be requested via REST API is not visible outside the local WLAN, and the REST API call can only be made inside the local network. Here we can use a node.js program (service) which requests the sensor values and sends them to ThingWorx. The main loop is an interval callback function requestFunction, called here every 5 seconds, which reads the sensor data via REST API fetch calls. In this example the data is requested from a local test web server which simulates an edge device. For the test two server URLs with parameters are used:

1.) http://127.0.0.1:8081/userId=8, where the userId is a random value 1...10; the response returns a json object with some properties
2.) http://127.0.0.1:8081/api/todos?id=122, where the id is a random value 1...200; the response also returns a json object with some properties

var http = require('http')
var https = require('https')
const fetch = require('node-fetch')
var request = require("request")

var userId = 1 //could be from 1 to 10
var todosId = 1 //could be 1-200

function requestFunction() {
  userId = Math.floor((Math.random() * 10) + 1)
  todosId = Math.floor((Math.random() * 200) + 1)
  fetch('http://127.0.0.1:8081/userId/' + userId)
    .then(response => response.json())
    .then(json => {
      console.log(JSON.stringify(json))
      setPropValue("profession", json["profession"])
      setPropValue("userName", json["name"])
      setPropValue("userId", json["id"])
      setPropValue("userPassword", json["password"])
    })
  fetch('http://127.0.0.1:8081/api/todos?id=' + todosId)
    .then(response => response.json())
    .then(json => {
      console.log(JSON.stringify(json))
      setPropValue("message", json["title"])
    })
}
// ==============================================
setInterval(requestFunction, 5000) //every 5 sec

For information about the REST API syntax used to set or change the value of a Thing property, we can check the REST API Reference Guide: https://developer.thingworx.com/en/resources/guides/rest-api-how-guide and, for property value access, https://developer.thingworx.com/en/resources/guides/rest-api-how-guide/property-values-rest-api-how

When we review the code above we can see the function setPropValue, which sets the value of a particular property. Here the ThingWorx server:port is mxxxxx7o.studio-trial.thingworx.io:8443.
The Thing name is “REST_API_EDGE”:

function setPropValue(propName, propValue) {
  var options = {
    method: 'PUT',
    url: 'https://mxxxxx7o.studio-trial.thingworx.io:8443/Thingworx/Things/REST_API_EDGE/Properties/PROPNAME',
    headers: {
      // use here the appKey of the user who created the Thing (here REST_API_EDGE)
      appKey: 'fxxx7x4a-19x4-4xx3-bxxxa-9978a8xxxx17x', //appKey for the user
      'Content-Type': 'application/json'
    },
    body: { PROPNAME: 'XXXXXXX' },
    json: true
  };
  // make a string from the options json and replace the
  // placeholder PROPNAME by the function argument propName
  var str_temp = JSON.stringify(options).replace(/PROPNAME/g, propName)
  // replace the placeholder XXXXXXX by the function argument propValue
  // and convert the string back to json
  options = JSON.parse(str_temp.replace(/XXXXXXX/g, propValue))
  console.log("options in setPropValue:")
  console.warn(options)
  request(options, function(error, response, body) {
    if (error) { console.log("error in request"); throw new Error(error); }
    //print the return code - success is 200
    console.log("response.statusCode=" + response.statusCode)
    console.log("response")
  });
}
// =================================================

The code was generated with the REST API client POSTMAN. We can use this tool to test REST API calls in the POSTMAN GUI, which offers comfortable functionality for testing and debugging. Once a call works in the POSTMAN UI, it can be exported to different programming formats (JavaScript, Node.js, etc.), i.e. POSTMAN generates the corresponding code. When we start the script above, we can verify that the property values change every 5 seconds.

The best way now to bind the data in Vuforia Studio is via the External DATA panel.

Afterwards we can test in the Preview and later on the end device.
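As a side note, the request package used above is deprecated; the same PUT call could also be made with node-fetch, which the script already imports. Below is only a minimal sketch under the same assumptions as above (the server URL, appKey and Thing name REST_API_EDGE are the same placeholders):

function setPropValueFetch(propName, propValue) {
  var body = {}
  body[propName] = propValue // e.g. { "message": "some title" }
  fetch('https://mxxxxx7o.studio-trial.thingworx.io:8443/Thingworx/Things/REST_API_EDGE/Properties/' + propName, {
    method: 'PUT',
    headers: {
      appKey: 'fxxx7x4a-19x4-4xx3-bxxxa-9978a8xxxx17x', // appKey of the user who created the Thing
      'Content-Type': 'application/json'
    },
    body: JSON.stringify(body)
  })
  .then(response => console.log("setPropValueFetch: statusCode=" + response.status)) // 200 means success
  .catch(error => console.error("setPropValueFetch failed: " + error))
}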
View full tip