The PTC Academic Team developed a simple Android App utilizing the Vuforia SDK that demonstrates Augmented Reality using FIRST robots, and released it for the FIRST World Championship 2016 in St. Louis, MO. For more information, please check out - PTC+FIRST Augmented Reality Robots - Android App
Have any FIRST teams used, or do they plan to use, the Vuforia SDK for image recognition and tracking on their FIRST robots this season or last? If so, please tell us about it and share with the community how you implemented image recognition and tracking using Vuforia.
If you and your team would like to try Vuforia, please check out this document, which explains how to get access to the Vuforia SDK - Augmented Reality and PTC's Vuforia for FIRST Teams
Our FTC team (FIX IT 3491) has worked with Vuforia. Last season, we played around with some image processing; even with a basic goal, it was still difficult to get working. When we learned of Vuforia, we jumped at it. So far, we've tracked several objects (e.g. calculator, toothpaste) and, since we have a little experience with image processing in OpenCV, we've come to appreciate just how nice Vuforia is.
We've edited the code from the Vuforia sample app to work with the FTC code and to extract the rotation and location of objects the camera can see. We figured that, since we'd already done the work, there wasn't any reason for others to do it again (unless they're curious, of course). So we're posting a "Vuforia in FTC" tutorial on YouTube that describes the importing process and how to begin object tracking in the FTC app. For anyone interested, here's the link: Vuforia in FTC - YouTube. So far, there are two videos, with a few more on the way.
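For reference, extracting location and heading from a Vuforia pose mostly comes down to reading entries out of a 4x4 transform matrix. Here's a small, self-contained sketch of that math on a plain row-major float array (the same shape that the FTC SDK's OpenGLMatrix wraps); the class and method names here are just ours for illustration, not part of the SDK:

```java
public class PoseUtil {
    // Extract translation (in the target's units, typically mm) from a
    // 4x4 row-major pose matrix: the translation is the last column.
    public static double[] translation(float[] m) {
        return new double[] { m[3], m[7], m[11] };  // x, y, z
    }

    // Heading (rotation about the Z axis) in degrees, computed from the
    // first column of the rotation block: yaw = atan2(r10, r00).
    public static double headingDegrees(float[] m) {
        return Math.toDegrees(Math.atan2(m[4], m[0]));
    }

    public static void main(String[] args) {
        // Identity rotation, translated 100 along x.
        float[] pose = {
            1, 0, 0, 100,
            0, 1, 0,   0,
            0, 0, 1,   0,
            0, 0, 0,   1
        };
        System.out.println(translation(pose)[0]);  // prints 100.0
        System.out.println(headingDegrees(pose));  // prints 0.0
    }
}
```

In the FTC app itself you'd get this matrix from the trackable's listener rather than building it by hand; the videos walk through that part.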
From our experience so far, we love Vuforia. It's really easy to begin tracking objects and use them in your OpModes. We definitely recommend using this in the next season, even if it's only in your pit displays (although there are many uses on the field).
Hello. Rookie team 14167 (which I mentor) will definitely be using vision and Vuforia before the season is over. We were very happy to have a competitive (albeit sensor-less) robot for our first meet (we ended up 5th). If we can stick to our plan, we'll make our first attempt at vision by our 3rd meet, and we'll definitely be sending the students here to use PTC's resources and to share what we learn.
The plan did not quite hold, but close. We've just gotten Vuforia and TensorFlow running in test mode for shape recognition, using the model provided on GitHub. Next we need to integrate it into our master autonomous OpMode, expand on it, and test it under real match conditions. The students are on it.
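Once the detector hands back a frame's recognitions, the autonomous logic is mostly about ranking bounding-box positions. Here's roughly what that selection step looks like in plain Java, using the gold/silver labels from the provided model; Detection and goldPosition are just illustrative names of our own (the FTC SDK's Recognition class exposes similar getLabel()/getLeft() accessors):

```java
import java.util.Arrays;
import java.util.List;

public class SamplingLogic {
    // A detected object: its label plus the left edge of its bounding
    // box in pixels (always >= 0, so -1 works as a "not seen" sentinel).
    static class Detection {
        final String label;
        final float left;
        Detection(String label, float left) { this.label = label; this.left = left; }
    }

    // Decide where the gold mineral sits ("LEFT", "CENTER", "RIGHT") in a
    // frame showing all three minerals, by comparing the gold detection's
    // x position against the two silver ones.
    static String goldPosition(List<Detection> detections) {
        float goldX = -1, silver1X = -1, silver2X = -1;
        for (Detection d : detections) {
            if (d.label.equals("Gold Mineral")) goldX = d.left;
            else if (silver1X < 0) silver1X = d.left;
            else silver2X = d.left;
        }
        if (goldX < 0 || silver1X < 0 || silver2X < 0) return "UNKNOWN";
        if (goldX < silver1X && goldX < silver2X) return "LEFT";
        if (goldX > silver1X && goldX > silver2X) return "RIGHT";
        return "CENTER";
    }

    public static void main(String[] args) {
        List<Detection> frame = Arrays.asList(
            new Detection("Silver Mineral", 100),
            new Detection("Gold Mineral", 350),
            new Detection("Silver Mineral", 600));
        System.out.println(goldPosition(frame));  // prints CENTER
    }
}
```

In a real OpMode you'd run this against each batch of updated recognitions and bail out once you get a non-UNKNOWN answer.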