I am working on a presentation with multiple components placed apart from each other, some as far as 0.5 m apart. Using a ThingMark would be difficult, since tracking would constantly be lost, especially when someone wants a closer look at a certain part. The spatial target excels here: it keeps the parts in the positions they were placed, even if I lose sight of them. The only downside is that it has to be placed manually every time the experience is loaded.
The idea is to have a starting point (for example a ThingMark) that the parts are referenced to. A person scans the ThingMark, the experience loads, and it somehow uses the spatial target's capability of keeping the parts in place, even after sight of the ThingMark is lost.
This is probably a long shot, but does anyone know if this is possible, or perhaps an alternative approach to this scenario?
Thank you!
What device are you working with? Some are much better than others at maintaining tracking after losing sight of the ThingMark. AFAIK you should be able to lose sight of the ThingMark after it has been acquired, provided the device has mapped enough of the environment.
I have tested with an iPhone XR, with both "extended tracking" and "persist map" enabled in the 3D container, but the performance was still poor.