Augmented Reality with OpenCV and Unity
A lot of people flip out over Vuforia, but they don’t realize the real wizard behind the curtain is OpenCV. Here’s a sample from a job I did that uses OpenCV’s face detection to not only detect a face but also determine its orientation and position.
How it works
OpenCV is an open source computer vision library that supports image tracking. It does things like detect corners and faces, so you can use it for spatial tracking and facial recognition. It’s a big library, too. This example was built using Unity, and the steps I took are below, right after a quick look at what plain OpenCV face detection looks like on its own.
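This is just an illustrative sketch using the standard Python OpenCV bindings, not the C# code from the project itself (the Unity plugin described below wraps the equivalent calls in C#). The image file name is a placeholder.

```python
# Minimal OpenCV face detection sketch (Python bindings, opencv-python package).
# Illustrative only -- the Unity version of this project uses the C# wrapper.
import cv2

# Load the frontal-face Haar cascade that ships with OpenCV.
cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

img = cv2.imread("face.jpg")                  # placeholder test image
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)  # detection runs on grayscale

# Returns a list of (x, y, w, h) rectangles, one per detected face.
faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)

for (x, y, w, h) in faces:
    cv2.rectangle(img, (x, y), (x + w, y + h), (0, 255, 0), 2)

cv2.imwrite("faces_out.jpg", img)
```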
1.) Created a new Unity Project
2.) Imported an OpenCV library for Unity. Although OpenCV is open source, you still need a port into Unity. At the time of this writing there are several commercial solutions available in the Unity Asset Store. I have researched them and can say, after a few missteps, that the best solution so far is from Enox Software. I recommend both the OpenCV for Unity and the Dlib FaceLandmark Detector packages. Dlib gives you 68 two-dimensional points that correspond to key positions on the face. We could work out the 3D pose from those points ourselves (which takes forever), or we can simply call OpenCV’s built-in pose solver, solvePnP, and let it do the matrix math for us. For example, notice points 49 and 55 in the image. These are the corners of the mouth. If point 49 is higher than point 55, the person is tilting their head to one side.
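To make that landmark-to-pose step concrete, here’s a sketch of the same idea in the Python dlib/OpenCV bindings rather than C#: dlib’s shape predictor gives the 68 landmarks, a handful of them are matched against rough 3D positions on a generic head model, and cv2.solvePnP returns the head’s rotation and translation. The model file path, camera matrix, and 3D model points are assumptions for illustration, not values from this project.

```python
# Illustrative head-pose sketch: dlib 68 landmarks + OpenCV solvePnP.
# Not the project's C# code; paths, model points, and camera values are assumed.
import cv2
import dlib
import numpy as np

detector = dlib.get_frontal_face_detector()
predictor = dlib.shape_predictor("shape_predictor_68_face_landmarks.dat")

img = cv2.imread("face.jpg")
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
face = detector(gray)[0]          # assume exactly one face for the sketch
shape = predictor(gray, face)     # 68 two-dimensional landmark points

# A few of the 68 landmarks (0-indexed here, so "point 49" in the article
# is index 48) paired with rough 3D positions on a generic head model.
image_points = np.array([
    (shape.part(30).x, shape.part(30).y),   # nose tip
    (shape.part(8).x,  shape.part(8).y),    # chin
    (shape.part(36).x, shape.part(36).y),   # left eye, outer corner
    (shape.part(45).x, shape.part(45).y),   # right eye, outer corner
    (shape.part(48).x, shape.part(48).y),   # mouth corner (point 49)
    (shape.part(54).x, shape.part(54).y),   # mouth corner (point 55)
], dtype=np.float64)

model_points = np.array([
    (0.0,    0.0,    0.0),      # nose tip
    (0.0,   -330.0, -65.0),     # chin
    (-225.0, 170.0, -135.0),    # left eye, outer corner
    (225.0,  170.0, -135.0),    # right eye, outer corner
    (-150.0, -150.0, -125.0),   # left mouth corner
    (150.0,  -150.0, -125.0),   # right mouth corner
], dtype=np.float64)

# Approximate the camera matrix from the image size, no lens distortion.
h, w = img.shape[:2]
camera_matrix = np.array([[w, 0, w / 2],
                          [0, w, h / 2],
                          [0, 0, 1]], dtype=np.float64)
dist_coeffs = np.zeros((4, 1))

# solvePnP returns the head's rotation and translation relative to the camera.
ok, rvec, tvec = cv2.solvePnP(model_points, image_points,
                              camera_matrix, dist_coeffs)
print("rotation vector:", rvec.ravel())
print("translation vector:", tvec.ravel())
```

In the Unity version, that same rotation and translation is what gets applied to the virtual object so it follows the head’s orientation and position.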
Original source: http://www.jtrue.com/code/helmet