NVIDIA Releases New AR SDK to Create 3D Facial Meshes Using a Webcam and RTX GPU


Facial Mesh Example (Image Credit: NVIDIA)

NVIDIA rolled out a new beta today. The new AR SDK currently offers face-tracking features using nothing but a webcam and Tensor Cores. Tensor Cores are unique to NVIDIA's RTX GPU line and provide hardware acceleration for AI deep learning. Used this way, they allow a webcam to capture the information needed for face tracking, from which a 3D mesh can be generated. Specific facial features such as the lips, eyes, nose, and eyebrows can be tracked with 3 degrees of freedom, while a more complete mesh of a human face, including head pose, is tracked with 6 degrees of freedom.
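To make the distinction concrete, here is a minimal C++ sketch of what 3-degrees-of-freedom landmarks versus a 6-degrees-of-freedom head pose might look like. The types and vertex count are purely illustrative assumptions, not structures from NVIDIA's actual SDK:

```cpp
#include <array>

// Hypothetical illustration, not the NVIDIA AR SDK's real data types.
// A landmark tracked with 3 degrees of freedom is just a point in space.
struct Landmark3DoF {
    float x, y, z;              // translation only: 3 degrees of freedom
};

// A head pose adds three rotational axes, for 6 degrees of freedom total.
struct HeadPose6DoF {
    float x, y, z;              // translation
    float pitch, yaw, roll;     // rotation
};

// A face mesh is then a set of such points plus the overall head pose.
struct FaceMesh {
    std::array<Landmark3DoF, 468> landmarks;  // vertex count is illustrative
    HeadPose6DoF pose;
};
```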

From NVIDIA

“Now available as an open beta, it enables several use cases:

  • Tracking faces on the camera to identify one or several subjects on an image, 
  • Overlaying assets on a person to put costumes or effects on people,
  • Controlling an animated or game character with your face and head movements.”

This SDK is also being implemented in a new plugin for OBS called StreamFX. NVIDIA goes on to state that this work would normally have taken "months if not years" without Tensor Cores; instead, the task is performed in real time alongside the hardware-accelerated NVENC H.264 encoder. So far, Tensor Cores have mostly been known for powering NVIDIA's Deep Learning Super Sampling in video games, which has recently advanced to its next iteration with DLSS 2.0.

Peter Brosdahl
As a child of the '70s, I was one of the many who became enthralled by the video arcade invasion of the 1980s. Saving money from various odd jobs, I purchased my first computer, a used Atari 400, from a friend of my dad around 1982. It became a lifelong passion for upgrading and modifying equipment that, of course, led to a career in IT support.
