Hello community,
I've just come across an ad for Azure Remote Rendering.
It looks impressive, and I started wondering whether it's even possible to use it with HoloLens 2 and Vuforia Studio.
In the Microsoft documentation they talk about implementing an SDK for Unity...
Any chance there's a way to do this with Studio?
It would be so good to take advantage of it, especially when using models with millions of polygons.
Best regards
Thanks!
Hi @AndreaT ,
As far as I know, this is not possible with the current functionality, and I have no information on whether something is planned for the near future. I checked some internal resources but could not find anything about using Azure Remote Rendering.

Vuforia apps do use some Azure APIs, e.g. Mixed Reality Spatial Anchors on HoloLens for the Capture and View apps, so theoretically they could also use e.g. the Render APIs.

I am not sure what your requirements are: do you want to render all widgets, or only a specific 3D model? Do you have models that reach the limits of graphics performance, which is why you want to use this API?
In case you intend to render everything, you could possibly create your own application with the Unity API and use the Vuforia Engine features: https://library.vuforia.com/
I want to mention the following points.
Thank you @RolandRaytchev for all the information.
We use Vuforia Studio because there are no experienced software developers in my office; I'm the only one who programs, and only at a very simple level. That's fine as long as we stay within maintenance scenarios, but when it comes to experiences where we want to show the customer the whole system, we can easily reach 5 to 7 million polygons, which is way beyond HoloLens 2's capabilities. Hence the interest in the remote rendering feature.
I've never read anything about Unity, but if not much development knowledge is needed for basic applications, I might consider it; of course, in that case the Vuforia Engine license would have to be taken into account...
I also had a meeting with an expert from Microsoft regarding Azure Remote Rendering.
He confirmed that there are APIs to use and also that currently Unity is the way to go.
I asked whether those APIs are accessible via JavaScript, but he couldn't answer that question directly; he did mention that since Unity is the official environment, C# is the language that can definitely be used to call the APIs.
So there are indeed external APIs; it remains to be figured out whether they can be called via JavaScript or only via C#...
There is plenty of documentation on the Microsoft Docs portal (which I still have to read, due to a complete lack of time); here's the link: https://docs.microsoft.com/en-us/azure/remote-rendering/
For example, the person from Microsoft spoke about C#, but in the Docs I also see references to C++, so I'm wondering whether he just misspoke and there are actually ways to call the APIs via JavaScript...
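For what it's worth, Azure Remote Rendering also exposes a language-agnostic REST interface for session management, which any JavaScript runtime could in principle call. Below is a minimal sketch that only *builds* such a request; the host name, path shape, `api-version` value, and body fields are assumptions based on the Microsoft docs linked above and should be verified there before use (actual rendering/streaming still requires the native/Unity SDK, not REST):

```javascript
// Hypothetical sketch: composing an Azure Remote Rendering session-management
// REST request in plain JavaScript. Endpoint shape, api-version, and body
// fields are assumptions -- check the Microsoft docs before relying on them.
function buildCreateSessionRequest(region, accountId, sessionId, accessToken) {
  // Assumed host/path pattern for the ARR session REST API
  const url = `https://remoterendering.${region}.mixedreality.azure.com` +
              `/accounts/${accountId}/sessions/${sessionId}` +
              `?api-version=2021-01-01`;
  return {
    method: "PUT",            // create (or update) a rendering session
    url: url,
    headers: {
      "Authorization": `Bearer ${accessToken}`,  // STS token, obtained separately
      "Content-Type": "application/json",
    },
    // Assumed session parameters: lease time and VM size
    body: JSON.stringify({ maxLeaseTimeMinutes: 30, size: "standard" }),
  };
}

// Usage: placeholders below are hypothetical, not real credentials.
const req = buildCreateSessionRequest(
  "westeurope", "<account-id>", "my-session", "<sts-token>");
console.log(req.method, req.url);
```

The request object could then be sent with `fetch()` or `XMLHttpRequest`; whether Vuforia Studio's sandbox allows such outbound calls is a separate question.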
Could you give it a read given your expertise?
Thanks
Best regards and thanks again for the tips!
Just to be clear, View does not use Unity.
PTC's advanced engineering team has done very detailed experiments with Azure Remote Rendering, and we have promising results.
The real question is the use case: what is the need for very-large-scale rendering?
Most AR projects bring digital information into the physical world, and that digital information tends to be augmentation / additional data, i.e. ADDING to the already-complex physical world.
There are use cases that are more Mixed/VR-type experiences, such as digital prototyping and design review, where there is no physical product.
This is where we have done some experimentation.
If this is where your use cases sit, it would be interesting to learn more.
Hi @SteveGhee, sorry for taking so long to answer, but we're pretty busy these days.
Anyway, my case is indeed more on the Mixed Reality side. We're not focusing on augmenting the machine with digital information from ThingWorx; in fact, we don't do that at all at the moment.
The company I work for produces quite big machines.
Our target is to go to the customer with a "digital twin" in AR/MR and let them see how it would look on their site, introducing a low level of configurability. That's why I opened this post: at the moment, HoloLens 2 alone can't deliver a stable and smooth experience due to the size of the machines (number of polygons).
Looking forward to hearing from you
Andrea