NVIDIA rolled out a new beta today. The AR SDK, now in open beta, offers face tracking using nothing more than a webcam and tensor cores. Tensor cores are unique to NVIDIA's RTX GPU line and provide hardware acceleration for AI deep learning. Here they run the inference that extracts face-tracking data from the webcam feed, from which a 3D mesh can be created. Individual facial features such as the lips, eyes, nose, and eyebrows can be tracked with three degrees of freedom, while the more complete mesh of a human face, including head pose, is tracked with six degrees of freedom.
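To make the six-degrees-of-freedom idea concrete, here is a minimal, generic sketch: a head pose is three rotation angles (yaw, pitch, roll) plus a 3D translation, and the rotations can be composed into a single matrix for posing a mesh. The types and function below are invented for illustration and are not the SDK's actual API.

```python
import math
from dataclasses import dataclass

# Illustrative only: the AR SDK's real data structures are not shown here.
# Six degrees of freedom = 3 rotations + 3 translations.

@dataclass
class HeadPose:
    yaw: float    # rotation about the vertical axis, radians
    pitch: float  # rotation about the left-right axis, radians
    roll: float   # rotation about the front-back axis, radians
    x: float      # translation in camera space
    y: float
    z: float

def rotation_matrix(pose: HeadPose) -> list[list[float]]:
    """Compose yaw, pitch, and roll into one 3x3 rotation matrix (ZYX order)."""
    cy, sy = math.cos(pose.yaw), math.sin(pose.yaw)
    cp, sp = math.cos(pose.pitch), math.sin(pose.pitch)
    cr, sr = math.cos(pose.roll), math.sin(pose.roll)
    return [
        [cy * cp, cy * sp * sr - sy * cr, cy * sp * cr + sy * sr],
        [sy * cp, sy * sp * sr + cy * cr, sy * sp * cr - cy * sr],
        [-sp,     cp * sr,                cp * cr],
    ]

# A neutral pose (all angles zero) yields the identity rotation.
pose = HeadPose(yaw=0.0, pitch=0.0, roll=0.0, x=0.0, y=0.0, z=0.5)
R = rotation_matrix(pose)
```

A three-degrees-of-freedom feature, by contrast, would carry only the rotation part (or only a position), not both.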
“Now available as an open beta, it enables several use cases:
- Tracking faces on the camera to identify one or several subjects on an image,
- Overlaying assets on a person to put costumes or effects on people,
- Controlling an animated or game character with your face and head movements.”
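The third use case, driving a character with head movements, typically needs the raw tracked angles to be clamped to the rig's limits and smoothed against webcam jitter. The sketch below illustrates that idea with an invented function; it is not the SDK's API, just a generic example of mapping tracked angles onto a character.

```python
# Hypothetical sketch: feed tracked head angles (degrees) into a character rig.
# Clamping keeps the rig within its joint limits; exponential smoothing
# suppresses frame-to-frame webcam jitter.

def clamp(value: float, lo: float, hi: float) -> float:
    return max(lo, min(hi, value))

def drive_character_head(tracked_yaw: float, tracked_pitch: float,
                         prev: tuple[float, float] = (0.0, 0.0),
                         yaw_limit: float = 60.0, pitch_limit: float = 45.0,
                         smoothing: float = 0.3) -> tuple[float, float]:
    """Return the character's new (yaw, pitch), limited and low-pass filtered."""
    yaw = clamp(tracked_yaw, -yaw_limit, yaw_limit)
    pitch = clamp(tracked_pitch, -pitch_limit, pitch_limit)
    # Blend a fraction of the new reading with the previous frame's value.
    yaw = prev[0] + smoothing * (yaw - prev[0])
    pitch = prev[1] + smoothing * (pitch - prev[1])
    return yaw, pitch

# A sudden 90-degree reading is clamped to the 60-degree limit and eased in
# gradually rather than snapping the character's head around.
yaw, pitch = drive_character_head(90.0, 10.0)
```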
This SDK is also being implemented in StreamFX, a plugin for OBS. NVIDIA goes on to state that this kind of tracking would normally have taken “months if not years” of work, if not for tensor cores; instead, the task is done in real time, with the hardware-accelerated NVENC H.264 encoder handling the video encoding. So far, tensor cores have mostly been known for their use in NVIDIA's Deep Learning Super Sampling for video games, and DLSS has recently advanced to its next iteration with DLSS 2.0.