cliftonneff Posted December 26, 2017

I've been playing with the idea of using a custom app on the iPhone X, using its facial recognition to capture data for creating singing-face tracks. The reality is that the Animojis on the iPhone X are not that different from complex versions of what we do with "singing faces". Creating singing faces may be the most painful of all the programming that can be done on a display, so I was wondering if this wouldn't be an effective way to automate that process. It would look something like this:

- Custom application written for iOS that captures a few parameters of mouth movement using the facial recognition features of the iPhone X.
- Sing the song along with the music to capture the data. You may need to over-exaggerate the mouth movements.
- Post-processing to get the data into Light-O-Rama (a backdoor approach for editing the track data). This could be repeated for each part of the song if you use multiple singing faces.

A rough sketch of what the capture side might look like is at the end of this post. Anyone have any thoughts on this?
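To make that a little more concrete, here is a minimal, untested sketch of the capture piece. It assumes ARKit's face-tracking blend shapes on the iPhone X; the blend-shape names (jawOpen, mouthFunnel, mouthPucker, mouthClose) are real ARKit coefficients, but the sampling approach and the CSV export are just one way it could be done.

```swift
import ARKit
import Foundation

/// Buffers a few mouth-related blend-shape values per frame so they can be
/// exported and post-processed into a singing-face track on the PC later.
final class MouthCapture: NSObject, ARSessionDelegate {
    struct Sample {
        let time: TimeInterval   // seconds since capture started
        let jawOpen: Float       // 0.0 (closed) ... 1.0 (wide open)
        let mouthFunnel: Float   // rounded "O" shape
        let mouthPucker: Float   // tight pucker
        let mouthClose: Float    // lips pressed together while the jaw is open
    }

    private let session = ARSession()
    private var startTime: TimeInterval?
    private(set) var samples: [Sample] = []

    func start() {
        // Face tracking needs the TrueDepth camera (iPhone X and later).
        guard ARFaceTrackingConfiguration.isSupported else { return }
        session.delegate = self
        session.run(ARFaceTrackingConfiguration())
    }

    func stop() { session.pause() }

    // ARKit calls this roughly every frame while the face is tracked.
    func session(_ session: ARSession, didUpdate anchors: [ARAnchor]) {
        guard let face = anchors.compactMap({ $0 as? ARFaceAnchor }).first else { return }
        let now = ProcessInfo.processInfo.systemUptime
        if startTime == nil { startTime = now }

        let shapes = face.blendShapes
        samples.append(Sample(
            time: now - (startTime ?? now),
            jawOpen: shapes[.jawOpen]?.floatValue ?? 0,
            mouthFunnel: shapes[.mouthFunnel]?.floatValue ?? 0,
            mouthPucker: shapes[.mouthPucker]?.floatValue ?? 0,
            mouthClose: shapes[.mouthClose]?.floatValue ?? 0))
    }

    /// Dump the buffer as CSV so it can be moved to the sequencing PC.
    func csv() -> String {
        var out = "time,jawOpen,mouthFunnel,mouthPucker,mouthClose\n"
        for s in samples {
            out += String(format: "%.3f,%.3f,%.3f,%.3f,%.3f\n",
                          s.time, s.jawOpen, s.mouthFunnel, s.mouthPucker, s.mouthClose)
        }
        return out
    }
}
```

The CSV could then be copied off the phone and massaged into whatever format works for getting the data into Light-O-Rama.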
ItsMeBobO Posted December 26, 2017

Go for it! I love the idea. This technique could be used only for high pixel count displays.
cliftonneff Posted December 26, 2017 (Author)

I intended it more along the lines of a "4 channel face" than a high pixel count display. It would be more a matter of processing the data to determine which of the 4 mouths should be used for each period of time.
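Roughly, that post-processing might look like the sketch below. The channel names and every threshold are guesses that would need tuning against real captures, and Sample is the struct from the capture sketch above.

```swift
import Foundation

// One of the four mouth positions on a classic "4 channel face".
// The names and all thresholds below are guesses for illustration only.
enum MouthChannel: String {
    case closed, open, oh, ee
}

// Pick the channel that best matches one captured frame.
func classify(_ s: MouthCapture.Sample) -> MouthChannel {
    if s.jawOpen < 0.15 { return .closed }            // lips essentially shut
    if s.mouthFunnel > 0.4 || s.mouthPucker > 0.4 {   // rounded lips -> "O"
        return .oh
    }
    return s.jawOpen < 0.4 ? .ee : .open              // slightly open vs. wide "A"
}

// Collapse per-frame samples into fixed periods (50 ms here) by majority vote,
// giving one channel choice per period to turn into on/off events.
func channelPerPeriod(_ samples: [MouthCapture.Sample],
                      period: TimeInterval = 0.05) -> [(start: TimeInterval, channel: MouthChannel)] {
    guard let last = samples.last else { return [] }
    var result: [(start: TimeInterval, channel: MouthChannel)] = []
    var t: TimeInterval = 0
    while t <= last.time {
        let window = samples.filter { $0.time >= t && $0.time < t + period }
        let votes = Dictionary(grouping: window.map(classify), by: { $0 })
        let winner = votes.max { $0.value.count < $1.value.count }?.key ?? .closed
        result.append((start: t, channel: winner))
        t += period
    }
    return result
}
```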
k6ccc Posted December 26, 2017

Great idea! Run with it and let us all know if you get something that works well!
a31ford Posted December 26, 2017

AGREED...

Greg