Hello,
I want to add a marker on the surface of a 3D model. I need the xyz coordinates of the point I tapped on the surface of the model.
I found several code samples using raycast functions, but they all only return the selected object, not the coordinates of the tap.
Any idea on how to get the actual coordinates?
Hi @acroce ,
If I understand your question correctly, you are looking for functionality to get the coordinates of a tap point on a model surface - i.e. the X, Y and Z coordinates of this point relative to the model widget origin (0,0,0), right?
As far as I know there is currently no such functionality, but I have asked PTC R&D in the internal forum, so maybe someone from the PTC R&D team has an idea.
What I currently see as a possible solution / workaround is to go the reverse way - have some item whose X, Y and Z we can change and try to reach the desired position - something like:
1.) create a modelItem component which is a small ball and create bindings to sliders which change the X, Y and Z coordinates of this component, so we can try to move the ball modelItem near the desired position (see the sketch after this list)
2.) create a lot of small modelItems as some kind of grid which we can click to identify a particular point
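For idea 1.), a minimal sketch of what the slider binding could look like in the view JavaScript, assuming a small-ball model item widget named 'modelItem-1' and three slider widgets 'slider-X', 'slider-Y' and 'slider-Z' (all widget names are examples, and the slider 'value' property and its 0..100 range are assumptions about the widget configuration):
//=================================================
// watch the three sliders and move the marker model item accordingly
$scope.$watchGroup(
  ["app.view.Home.wdg['slider-X'].value",
   "app.view.Home.wdg['slider-Y'].value",
   "app.view.Home.wdg['slider-Z'].value"],
  function (newValues) {
    // assumption: the sliders deliver 0..100, map this to -0.5 m .. +0.5 m
    var toMeters = function (v) { return (Number(v) - 50) / 100.0; };
    $scope.setWidgetProp('modelItem-1', 'x', toMeters(newValues[0]));
    $scope.setWidgetProp('modelItem-1', 'y', toMeters(newValues[1]));
    $scope.setWidgetProp('modelItem-1', 'z', toMeters(newValues[2]));
    $scope.$applyAsync();
  });
//=================================================
When the ball sits on the desired spot, its x/y/z widget properties are the coordinates you were looking for.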
I received from PTC R&D the following confirmation:
//=================================================
There is no way to get that coordinate xyz value at this time.
You should NOT create lots of model items as that will kill performance.
Using sliders to find a location is also not a great idea - terrible UX, and not advisable/productive use of Studio experience.
If you are looking to identify a point in space, use an image target attached to a wand (stick) and use a second target as your reference point. This is technically possible in View, but Studio prevents you from declaring multiple targets. You can, however, use View directly (you would have to program it in HTML/JavaScript), or this is very easy to do using Vuforia Engine.
After R&D confirmed that my first ideas are not so good, I have an additional idea: use a virtual arrow that is attached to your device and try to touch the desired point on the surface with its tip while you move the device around in space. For example, you can set different lengths for this arrow for different distances to the object.
Then, when you think you have the correct location, confirm it, e.g. with a button or another kind of command.
The idea for how to implement it (not sure if this works, it requires some tests) is to use the tml3dRenderer.setupTrackingEventsCommand() callback function.
This callback delivers the vectors for the current eye position, the view direction and the up vector. So you can find out your current position in the world coordinate system and then calculate the position of the arrow tip. You also need to consider the position of your model widget if it is not at 0,0,0 (a small sketch of this offset follows the example below).
An example of the usage of the mentioned callback function is here:
////////////////////////////////////
$rootScope.$on('modelLoaded', function() {
  $scope.check_if_works = false;
  //=====================
  // set some properties
  // the next prop is important so that the tracking event will use the callback
  $scope.setWidgetProp('3DContainer-1', 'enabletrackingevents', true);
  $scope.setWidgetProp('3DContainer-1', 'dropshadow', true);
  $scope.setWidgetProp('3DContainer-1', 'extendedtracking', true);
  $scope.setWidgetProp('3DContainer-1', 'persistmap', true);
  console.log("now check again the setting of the environment");
  console.warn($scope.app.view.Home.wdg['3DContainer-1'].enabletrackingevents);
  var enableTrackingEvents = $scope.app.view.Home.wdg['3DContainer-1'].enabletrackingevents;
  if (enableTrackingEvents) $scope.setWidgetProp('3DLabel-2', 'text', "3DContainer.enabletrackingevents=true");
  //====================
  // pass the function reference (not a call) so $timeout invokes it after 2.5 sec
  $timeout($scope.setMyEYEtrack, 2500);
});
/////////////////////////////////////////////////////
////////////////////////////////////////////////////////////
$scope.setMyEYEtrack = function() {
  if (tml3dRenderer) {
    try {
      tml3dRenderer.setupTrackingEventsCommand(function(target, eyepos, eyedir, eyeup) {
        if (!$scope.check_if_works) // check that setupTrackingEventsCommand works - called at least once
        {
          $scope.check_if_works = true; // set the flag so the label is only updated once
          // the line below makes sense only for testing
          $scope.setWidgetProp('3DLabel-1', 'text', "callback setupTrackingEventsCommand is OK");
        }
        var scale = 2.0; // distance to the eye
        // move the widget according to the new viewpoint position of the device
        $scope.setWidgetProp('3DImage-1', 'x', eyepos[0] + eyedir[0]*scale);
        $scope.setWidgetProp('3DImage-1', 'y', eyepos[1] + eyedir[1]*scale);
        $scope.setWidgetProp('3DImage-1', 'z', eyepos[2] + eyedir[2]*scale);
        $scope.$applyAsync();
      }, undefined);
    } catch (e) {
      $scope.setWidgetProp('3DLabel-1', 'text', "exception=" + e);
    }
  } else {
    $scope.setWidgetProp('3DLabel-1', 'text', "null tml3dRenderer object on HoloLens");
  }
}; ////// finish setMyEYEtrack
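To get the X, Y, Z of that picked point relative to the model widget origin (which is what the original question asks for), the world-space point in front of the device can be offset by the model widget position. A minimal sketch, assuming the model widget is named 'model-1' (an example name) and is neither rotated nor scaled, otherwise the full transform would have to be applied:
//=================================================
// could be called from inside the setupTrackingEventsCommand callback above
$scope.getPickRelativeToModel = function(eyepos, eyedir, scale) {
  // world-space point at distance 'scale' in front of the device (same formula as above)
  var pick = [ eyepos[0] + eyedir[0]*scale,
               eyepos[1] + eyedir[1]*scale,
               eyepos[2] + eyedir[2]*scale ];
  // subtract the model widget position to express the point relative to its origin
  // note: this ignores any rotation or scale of the model widget
  var mdl = $scope.app.view.Home.wdg['model-1'];
  return [ pick[0] - mdl.x, pick[1] - mdl.y, pick[2] - mdl.z ];
};
//=================================================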
Hello Acroce,
With OpenGL and a custom program (in C or C++), the solution is to cast a ray from the center of the screen rendered by the camera to the center of the 3D model.
Along this ray, the first collision with the surface of the 3D model gives you the point.
It then needs to be transformed into world coordinates.
I have found this article that explains the math behind it with matrices, vectors, etc.:
http://antongerdelan.net/opengl/raycasting.html
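To illustrate the "first collision" step of that approach, here is a minimal, self-contained ray-triangle intersection (Möller-Trumbore algorithm) in plain JavaScript. This is generic math only, not a Vuforia Studio or OpenGL API; the ray (camera position and direction) and the mesh triangles would have to come from your own rendering code:
//=================================================
// returns the hit point if the ray (orig, dir) intersects triangle (v0, v1, v2), else null
function rayTriangleIntersect(orig, dir, v0, v1, v2) {
  var sub   = function (a, b) { return [a[0]-b[0], a[1]-b[1], a[2]-b[2]]; };
  var cross = function (a, b) { return [a[1]*b[2]-a[2]*b[1], a[2]*b[0]-a[0]*b[2], a[0]*b[1]-a[1]*b[0]]; };
  var dot   = function (a, b) { return a[0]*b[0] + a[1]*b[1] + a[2]*b[2]; };
  var e1 = sub(v1, v0), e2 = sub(v2, v0);
  var p = cross(dir, e2);
  var det = dot(e1, p);
  if (Math.abs(det) < 1e-8) return null;      // ray is parallel to the triangle
  var invDet = 1.0 / det;
  var t0 = sub(orig, v0);
  var u = dot(t0, p) * invDet;
  if (u < 0 || u > 1) return null;
  var q = cross(t0, e1);
  var v = dot(dir, q) * invDet;
  if (v < 0 || u + v > 1) return null;
  var t = dot(e2, q) * invDet;                // distance along the ray
  if (t < 0) return null;                     // intersection is behind the ray origin
  return [orig[0] + dir[0]*t, orig[1] + dir[1]*t, orig[2] + dir[2]*t];
}
// example: a ray from the origin along +Z against a triangle lying in the z=5 plane
console.log(rayTriangleIntersect([0,0,0], [0,0,1], [-1,-1,5], [1,-1,5], [0,1,5])); // -> [0, 0, 5]
//=================================================
Looping this test over all triangles of the mesh and keeping the hit with the smallest t gives the first surface point along the ray.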
By the way, Vuforia Studio is using OpenGL ES, but I don't know how to access the API from JavaScript.
Best regards,
Samuel
Hi @sdidier ,
Thanks for the attached link. Looks cool…
I do not think the question is whether this is possible in general. Yes, a lot of tools can do this - every CAD and CAD viewer tool supports measurements on surface points.
I think the question here is whether there is supported functionality we can use in Vuforia Studio.
I already contacted R&D in the internal forum and it was confirmed that this is not available yet. What we can currently do is only attempt some workarounds.
I think we can solve the issue in Vuforia Engine without any problems, but there we will have a problem with importing the PVZ file, which is not supported and needs some external tools. Significant additional programming work is also required.
Hello Roland,
Yes, it is possible to do that in Vuforia Engine, with Unity3D for example.
A raycast function exists in this 3D engine:
https://docs.unity3d.com/ScriptReference/Physics.Raycast.html
But as you said, there is no PVZ file format support.
The model needs to be converted to another format.
Best regards,
Samuel
Thank you all very much for your replies,
Too bad there is no native function yet for this application.
I will propose vuforia engine for this application.
Can you confirm that with Engine this is feasible?
Yes, according to the available functionality in Engine, the implementation of such a task is in general feasible. This was also confirmed by the PTC dev team.
But it depends on the details...
Using Engine could have some advantages but also some disadvantages, which you need to discover. It is a more advanced but lower-level API compared to Studio, so you need to invest more programming work to implement features that already exist in Studio. It also means another tool, e.g. Unity, which is a specific environment that requires some time to learn, plus programming in C#, but it is a very cool tool and I think it is worth investing the time to learn it. In this case you could use some algorithm to interact with the geometry data, as already mentioned by @sdidier.
But what I think is the main problem is the handling of the CAD data. It depends on where your data is created. If it is in PVZ format, you cannot use it directly but need to convert it to a format which can be used in Unity, e.g. the OBJ or FBX file formats. You will also not be able to use sequences created in Creo Illustrate.
Another point is that in a Unity app all the data you want to use is already bundled in the application. In some cases this is good, but it can also be a disadvantage, because you will not have the option to load a project on demand when you scan e.g. a ThingMark. Of course there could be some options to combine both tools, but I do not think that will be trivial.
Also, for Engine tasks I suggest using the following links (in the table below) to request further support if you need assistance with the Engine functionality:
Product | Knowledge Base | Community | Commercial Support