
Tracking a Very Simple Part

JoeKellyATI

I'm looking at using AR/Vuforia Studio to aid in the inspection of some metal plates.  The idea is that the AR device (HoloLens in the future, mobile for now) will instruct the operator and point to where different measurements should be taken.

 

These plates are about 500 mm x 135 mm x 8 mm.  I'm having trouble getting a good tracking lock on the model with my iPhone and Vuforia View.

 

I have a few questions:

  1. Is there a way to optimize how I can track a very simple part like these metal plates?
  2. Will the HoloLens be more accurate and could it solve these tracking problems?
  3. Is it possible to live track the object if the operator picks up and moves the metal plate (i.e. do the pointing arrows follow the part as it's moving)?
ACCEPTED SOLUTION

Hello @JoeKellyATI,

 

Regarding points 1 and 2 from your post: for an eyewear Vuforia Studio project there is no extra setting to improve the tracking behavior.

However, we can improve the behavior by taking some steps to prepare the model data that is used. Here I want to refer to information from the Model Target Generator user guide (https://library.vuforia.com/articles/Solution/model-target-generator-user-guide.html). Although this is a guide for Vuforia Engine, most of its recommendations are also relevant for Studio, because the same technology is behind the functionality.

Model Preparation

For a detailed overview of recommendations and best practices, see Model Targets Supported Objects & CAD Model Best Practices.

When preparing the 3D model for the Model Target Generator (MTG), you will need to check that the scale of the digital model matches the physical object. Attempting to track a toy replica with the 3D model of the full-size object can fail in some instances. To get the best tracking quality, the sizes of the models and objects should match. See Best Practices for Scaling Model Targets for more information on this topic.

If the model has more than 400,000 polygons or more than 20 parts, the model will need to be simplified. The process of simplification consists of reducing the number of polygons needed to represent the object as a mesh. Simplification is required for the computer vision algorithms to run on mobile devices in real-time; polygon reduction does not impact the detection and tracking accuracy, as long as it is not too coarse.  

Any simplification tool will introduce some artefacts. Artefacts with a reduction range of 1:10 generally do not impact the computer vision algorithms. For example, reducing a mesh corresponding to the 3D model of a whole car from 500,000 polygons to 50,000 polygons produced a significantly reduced database, and still achieved good detection and tracking performance.
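As a rough illustration of such a 1:10 reduction (not part of the Vuforia workflow itself), a mesh can be decimated in a small three.js script. SimplifyModifier is a simple edge-collapse decimator; in practice you would normally use your CAD tool's or the MTG's own simplification, so treat this only as a sketch:

// Prototype of a ~1:10 polygon reduction with three.js (illustration only).
import { SimplifyModifier } from 'three/examples/jsm/modifiers/SimplifyModifier.js';

function decimateTenToOne(geometry) {
    const modifier = new SimplifyModifier();
    // modify(geometry, count) collapses 'count' vertices via iterative edge
    // collapse and returns a new BufferGeometry; removing ~90% of the
    // vertices approximates a 1:10 reduction.
    const removeCount = Math.floor(geometry.attributes.position.count * 0.9);
    return modifier.modify(geometry, removeCount);
}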

Modification

In some cases, the 3D model of an object can contain parts that are not on the object being tracked (or not on all instances), such as an optional extra component that you can specify when ordering the object - extra footrests or a passenger seat on a motorbike, for example. Ideally, the 3D model used for tracking should not contain this part.

In addition, parts that can easily move from their position in the 3D model (e.g. a steering wheel that can be rotated or adjusted to fit the driver) can interfere with tracking. Removing them from the 3D model can improve tracking quality, since it decreases the disparities between the 3D model and the real object.

Internal parts that are usually contained in a CAD model, but cannot be seen from the outside of the object when initializing tracking, should also be removed. They increase the size of the device database stored in your app and the polygon count to deal with at run time. Removing them further improves performance when detecting and tracking a Model Target.

 

Regarding point 3: the first question is what model data you want to use for the augmentation and for the Model Target. Does your application contain only one simple part model (the plate with the mentioned dimensions) with simple geometry? How do you want to scan it, via a Model Target? And what should the augmented geometry be? In other words, does your project contain only this plate as augmentation, so that for different plates / simple parts you would use different projects, e.g. calling a separate Vuforia Studio experience for each plate with specific dimensions?

When you scan such a simple part, you have to look at the part (the Model Target) from the same direction every time during the scan process, and I think this will not make much sense. It may then be better to try the 360 Model Target feature (coming soon in Studio, or already available in Vuforia Engine).

Another way is to use for your augmentation an assembly that contains the different components, e.g. the different plates. Here you scan one reference target (a global Model Target for the assembly, or a ThingMark, or an image target) to set the global coordinate system for the assembly. The components, e.g. the individual plates, can then be defined as model items. This makes it possible to select them by tapping (the model item click event) and to start some actions; a minimal sketch of this follows below. Moving the parts via drag and drop will be difficult to implement. You could track the eye vector, the up direction, and the eye position relative to the global coordinate system, and the component positions are also known, so in principle you could take actions to move a component. But this will be difficult to implement, because we cannot yet track, e.g., the position of the fingers to define the movement. One possible scenario: the part stays hanging in front of you at a particular distance, and when you move your device you move the part. As already mentioned, this is possible in general, but difficult to implement.
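As an illustration of the tap selection mentioned above, here is a minimal Home.js sketch. The 'userpick' event and its 'occurrence' payload are the usual Studio pattern; the widget name 'model-1' and the helper showInstructionsFor() are assumptions for this example:

// Home.js (Vuforia Studio) - minimal sketch; 'model-1' is an assumed widget name.
// 'userpick' fires when the user taps a model or one of its model items.
$scope.$on('userpick', function (event, targetName, targetType, eventData) {
    // eventData is a JSON string; 'occurrence' is the tapped component path, e.g. '/0/3'
    var pickedPath = JSON.parse(eventData).occurrence;
    if (targetName === 'model-1') {
        // Hypothetical helper: show the measurement instructions for this plate.
        showInstructionsFor(pickedPath);
        $scope.$applyAsync(); // refresh any bound widget properties in the view
    }
});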

Of course, a simpler approach is also possible. You can display some arrows (an additional model) which indicate the direction in which the part should be moved. You can select the direction by tapping the correct arrow and specify the distance for the translation (input could be done, e.g., as shown in these posts: https://community.ptc.com/t5/Vuforia-Studio-and-Chalk-Tech/How-to-create-a-custom-button-pressable-on-HoloLens-device/ta-p/654984 and https://community.ptc.com/t5/Vuforia-Studio-and-Chalk-Tech/How-can-we-make-a-3D-Widget-on-HoloLens-visible-in-front-of-me/ta-p/658541). The technique in the second post describes how to move a 3D panel (but it could also be a part) in front of your device at a specific distance and fix its position there, e.g. by saying "show UI"; a sketch of this follows below.
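As a rough sketch of the technique from the second linked post: the Studio tracking callback reports the eye pose each frame, which can be used to reposition a widget in front of the device. The widget name 'arrow-1' and the 1.2 m distance are assumptions for this example:

// Home.js (Vuforia Studio) - sketch of "fix a widget in front of the device".
// 'arrow-1' and the 1.2 m distance are assumed values for illustration.
var FIXED_DISTANCE = 1.2; // metres in front of the eye/device

tml3dRenderer.setupTrackingEventsCommand(function (target, eyepos, eyedir, eyeup) {
    // eyepos/eyedir are [x, y, z] in the experience's global coordinate system.
    var wdg = $scope.view.wdg['arrow-1'];
    wdg.x = eyepos[0] + FIXED_DISTANCE * eyedir[0];
    wdg.y = eyepos[1] + FIXED_DISTANCE * eyedir[1];
    wdg.z = eyepos[2] + FIXED_DISTANCE * eyedir[2];
    $scope.$applyAsync();
}, function (err) {
    console.log('tracking events error: ' + err);
});

Once the operator has confirmed a direction and a distance, applying the translation itself is just a widget property update, e.g. $scope.view.wdg['plate-1'].x += distance; (again with assumed names).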


2 REPLIES

Roland,

 

Thank you very much for the detailed answer!  It's a lot to take in right now.  I just purchased a HoloLens and am planning to work with it in the next couple of weeks, so I will take a look at some of the best practices you have linked and we will see how things go.
