Snapchat filters are fun, but how do they work?

Snapchat filters use machine learning algorithms, just like our ImageR™ tech, to detect the face in each image.


Treating each image as a grid of colour and brightness values, the program looks for areas of contrast. It then calculates the difference between light and dark regions: the eye socket, for example, is darker than the forehead.
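
As a concrete (and heavily simplified) illustration of that idea, here is a sketch in Python, not Snapchat’s actual code, that treats a greyscale image as a NumPy array and checks whether a hypothetical ‘forehead’ rectangle is noticeably brighter than the ‘eye’ rectangle directly beneath it, the same rectangle-comparison trick used by classic Haar-like face detectors. The coordinates and threshold are invented for the example.

```python
import numpy as np

def region_mean(grey, top, left, height, width):
    """Average brightness of a rectangular patch of a greyscale image."""
    return grey[top:top + height, left:left + width].mean()

def forehead_brighter_than_eyes(grey, top, left, height, width, threshold=20):
    """Haar-style test: the 'forehead' rectangle should be noticeably
    brighter than the 'eye socket' rectangle directly below it.
    Coordinates and threshold are illustrative, not Snapchat's values."""
    forehead = region_mean(grey, top, left, height, width)
    eyes = region_mean(grey, top + height, left, height, width)
    return (forehead - eyes) > threshold

# Synthetic example: a bright upper band over a darker lower band.
grey = np.full((100, 100), 80, dtype=np.float32)
grey[:50, :] = 180  # brighter "forehead" half
print(forehead_brighter_than_eyes(grey, top=20, left=20, height=30, width=60))  # True
```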


Using the Active Shape Model, a facial landmark detection technique, Snapchat has devised an ‘average face’, which it matches to the image from your phone’s camera to locate your facial features. How was this average face formulated? With machine learning: a process that is complex to train, but delivers an instantaneous result once it runs on your phone.
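
The exact details of Snapchat’s training pipeline aren’t public, but as a rough sketch of how an ‘average face’ can be built from labelled data, the snippet below simply averages hand-marked landmark coordinates across a few made-up sample faces. Real pipelines align and normalise the faces before averaging.

```python
import numpy as np

# Each training face is a list of hand-marked (x, y) landmarks in a fixed
# order (eye corners, nose tip, mouth corners...). The values are made up.
labelled_faces = np.array([
    [[30, 40], [70, 40], [50, 60], [35, 80], [65, 80]],  # sample face 1
    [[28, 42], [72, 41], [50, 63], [33, 82], [67, 81]],  # sample face 2
    [[31, 39], [69, 42], [51, 61], [36, 79], [66, 80]],  # sample face 3
], dtype=np.float64)

# The 'average face' is simply the mean position of each landmark
# across all of the labelled samples.
average_face = labelled_faces.mean(axis=0)
print(average_face)
```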


That’s right: behind the entertaining photo filters, Snapchat spent a whole lot of time manually marking the borders of facial features on hundreds of sample images. Using machine learning, they then trained an algorithm to detect those facial features automatically. Because faces come in all shapes and sizes, Snapchat combines the Active Shape Model with the ability to detect differences in shading to recognise where your eyes, forehead, nose and mouth are, fitting the model to your face.
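
To give a flavour of that matching step, here is a toy fit, very loosely in the spirit of an Active Shape Model and not how Snapchat actually implements it: each landmark of the average face is nudged toward the strongest nearby change in shading, then blended back toward the average shape so the result stays face-like. The search range, blend weight and iteration count are assumptions for the example, and `average_face` is the mean shape from the previous sketch.

```python
import numpy as np

def fit_shape(grey, average_face, search=5, shape_weight=0.5, iterations=10):
    """Crude shape fit: snap each landmark to the strongest vertical
    light/dark edge within `search` pixels, then blend back toward the
    average face so the shape stays plausible. Illustrative only."""
    points = average_face.astype(np.float64).copy()
    grad = np.abs(np.diff(grey.astype(np.float64), axis=0))  # vertical contrast
    height, width = grad.shape
    for _ in range(iterations):
        for i, (x, y) in enumerate(points):
            xi = min(max(int(round(x)), 0), width - 1)
            yi = min(max(int(round(y)), 0), height - 1)
            lo, hi = max(yi - search, 0), min(yi + search, height - 1)
            column = grad[lo:hi + 1, xi]
            points[i, 1] = lo + int(np.argmax(column))  # snap to strongest edge
        points = shape_weight * average_face + (1 - shape_weight) * points
    return points
```

A real Active Shape Model also learns how the landmark positions vary together across the training set, and uses that statistical model, rather than a simple blend, to keep the fitted shape plausible.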


Next, the Active Shape Model is adjusted to create a mesh: a 3D mask that moves, rotates and scales along with your face. This is how the filter can then contort your features, change your eye colour, add accessories and trigger animations when you open your mouth or raise your eyebrows (just like the cute little puppy you become).
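
As one last sketch, again an illustration rather than Snapchat’s actual code, the snippet below applies a similarity transform (scale, rotation and translation) to a set of 2D mesh points so an overlay follows the face, and fires a trigger when the gap between two hypothetical lip landmarks exceeds a made-up threshold.

```python
import numpy as np

def place_overlay(mesh_points, scale, angle, offset):
    """Scale, rotate and translate 2D mesh points so an overlay
    (ears, glasses, a dog nose...) follows the tracked face."""
    c, s = np.cos(angle), np.sin(angle)
    rotation = np.array([[c, -s], [s, c]])
    return scale * mesh_points @ rotation.T + offset

def mouth_open(landmarks, top_lip=2, bottom_lip=3, threshold=15.0):
    """Fire an effect when the vertical gap between the top- and
    bottom-lip landmarks exceeds a threshold (the indices and the
    threshold are made up for this example)."""
    gap = landmarks[bottom_lip, 1] - landmarks[top_lip, 1]
    return gap > threshold

# Tiny example: a square overlay tracked to a face that has tilted slightly,
# plus a mouth-open check on some invented landmark positions.
overlay = np.array([[-10, -10], [10, -10], [10, 10], [-10, 10]], dtype=float)
print(place_overlay(overlay, scale=1.2, angle=np.radians(10), offset=np.array([200.0, 150.0])))
print(mouth_open(np.array([[0.0, 0.0], [0.0, 5.0], [0.0, 40.0], [0.0, 60.0]])))  # True
```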


So, there it is! If you’re ever wondering how you’d look with bunny ears, or as a Teletubby or even a bumblebee (it happens all the time, right?), you now know how Snapchat creates these wonderful illusions.