Billions of people rely on their smartphones every day as their primary, and often only, means of communicating with the outside world. Since the vast majority of our interactions happen when we are not physically together, the smartphone enables most of the interactions we will have in our lifetimes.
The evolution of this type of distant communication started with the handwritten letter, which was both slow and limited to one-to-one communication. Next came voice over the telephone, which is immediate but lacks visual cues. Today we rely heavily on electronic forms of communication, which have the benefit of being both immediate and visual. We also gained the ability to communicate one to many, potentially one to billions, as we see with viral videos. Who hasn’t seen “Gangnam Style”?
As the smartphone stands today, some elements get “lost in translation” when we communicate electronically, mostly due to the lack of eye contact and the inability to see facial micro-expressions. One of the biggest reasons is that the cameras all of us rely on for these highly desired interactions lack the required type of stabilization.
When we’re talking with someone face-to-face, we naturally look them in the eyes. We seem to lock eyes with them effortlessly, without even having to think about it. Contrast that with the screen, where one moment the face is on one side of the frame, and the next moment it has jumped all the way to the opposite side. A face that appears to move unpredictably all over the place makes it impossible to track the eyes, which is what your brain is constantly telling you to do. This isn’t only relevant to conversations. Even when someone doesn’t have a verbal message for us, we will still try to look them in the eyes. When you pass someone on the street you automatically make and maintain eye contact with them, often until their eyes are no longer in view. It’s a way to understand and predict each other, an element that connects us. Even babies as young as one month can detect and track faces, making eye contact when other people are near. This innate behavior is often missing when we interact electronically.
We’ve seen big players like Google and Apple enhance the back camera on mobile, from increasing the resolution and frame rate to adding automatic focus and brightness adjustment, high dynamic range, electronic image stabilization (“EIS”), and even optical image stabilization (“OIS”) of the entire frame. You can now smoothly capture cars driving over the Golden Gate Bridge during rush hour, but you still can’t capture yourself running wildly on the beach without making viewers sick. That’s because when there is a main object of interest, the result often remains nauseating to watch: the subject may still appear to move all over the screen due to the lack of object-specific stabilization. Most front-facing cameras, unfortunately, have no EIS, OIS, or object-specific stabilization at all.
Imagine a video that is not only shake-free, but in which the subject is always nicely composed at or near the center of the frame, almost regardless of the camera’s movements. All of this is what inspired us to create the world’s first face-specific stabilization camera on a smartphone, an app that elevates the front-facing camera, allowing for stable and intimate electronic interactions. Sure, we’ve been able to get by with the front-facing camera for selfie snaps (with the help of fun filters), and we manage to FaceTime with the ones near and dear to us. But how many times have you wanted to ask your kid nephew to just stay still on FaceTime so that you don’t get motion-sick?
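Conceptually, face-specific stabilization can be reduced to re-centering a crop window on the detected face in every frame. The following is a minimal, hypothetical sketch (not our actual pipeline): it assumes per-frame face centers from some external face detector, and the exponential-smoothing factor `alpha` is an illustrative choice.

```python
# Hedged sketch of face-centered stabilization (illustrative only).
# Inputs: face centers per frame (from any face detector), the full
# frame size, and the size of the crop window we will display.

def stabilizing_offsets(face_centers, frame_size, crop_size, alpha=0.8):
    """For each frame, return the top-left corner of a crop window
    that keeps the (smoothed) face center in the middle of the crop."""
    fw, fh = frame_size
    cw, ch = crop_size
    offsets = []
    sx, sy = face_centers[0]  # smoothed face position
    for x, y in face_centers:
        # Exponential smoothing damps frame-to-frame detector jitter.
        sx = alpha * sx + (1 - alpha) * x
        sy = alpha * sy + (1 - alpha) * y
        # Place the crop so the smoothed center sits mid-crop,
        # clamped so the window stays inside the full frame.
        ox = min(max(int(sx - cw / 2), 0), fw - cw)
        oy = min(max(int(sy - ch / 2), 0), fh - ch)
        offsets.append((ox, oy))
    return offsets
```

Cropping each frame at its offset yields a video in which the face stays centered while the background, not the subject, absorbs the camera motion.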
1. Bye-bye nausea
Face-stabilized videos, no matter how action-packed, do not make us sick.
2. Eye tracking and micro-expressions
The most critical aspect of our interaction is our message, and that is quickly lost when we’re unable to track the eyes or notice the ever-important micro-expressions on a person’s face. Since stabilizing on the face keeps more of these important details visible, we’re able to foster a stronger connection between the viewer and the subject, ultimately bringing people closer together.
3. Facelapse / Selfielapse effect
Is this all just about more stable FaceTime and selfie videos? Absolutely not! Just like cars were not simply faster horses, this kind of technology also enables a whole lot more. We don’t only remove shakiness; by knowing the face will remain in the center, we are able to further automate aspects of the video-making process that would normally only be done as part of professional video editing, ultimately empowering people to communicate more effectively. Regular videos of yourself filmed with a smartphone are already quite nauseating to watch; speed one up and the result is even worse. When was the last time you watched a sped-up selfie video that wasn’t shot with a GoPro or DJI?
Back-camera timelapses, on the other hand, are common and have been for some time. A hyperlapse is interesting because it shows the world a sped-up version of what you see when you move through life, but what about a sped-up version of what the world sees when you’re moving through life? When the face remains stable in the center of every frame, it enables a whole new hyperlapse type of video, a face-lapse that was not possible before: a good-looking sped-up video of yourself! We believe this is important because one of the most effective ways to show off your uniqueness is to actually show how the world looks different with you in it: super stable and centered, possibly at an increased speed to make it extra interesting.
4. Selfie to the beat
What about that friend on social media who always has the best short clips, with coordinated music that matches the cuts between scenes? We’d like to enable everyone to share their lives in this simple and digestible way, which allows for interesting and cohesive storytelling. By intelligently reassigning the order of the stable frames in a video to match the music, we can enable anyone to create short, dramatic videos in seconds, a format that was previously only attainable by professional cinematographers and vloggers. It doesn’t have to be just one video: we can take several videos with a bunch of different backgrounds and automatically stitch them together, with awesome music and cuts, to create a sort of selfie time-lapse music video. For example, you can create a video for each day of your vacation and combine them at the end of the trip for a cinematic recap video.
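To illustrate the idea of cutting to the beat, here is a small, hypothetical sketch (not our actual algorithm): it assumes beat timestamps from any beat tracker and the lengths of the stabilized clips, and produces segments whose scene changes land exactly on beats.

```python
# Hedged sketch of beat-aligned cuts (illustrative only).
# beat_times: beat timestamps in seconds, from any beat tracker.
# clip_lengths: duration in seconds of each stabilized clip.

def beat_aligned_cuts(beat_times, clip_lengths):
    """Return (clip_index, start, end) segments, each spanning exactly
    one beat interval, cycling through the clips as they run out."""
    segments = []
    clip = 0
    pos = 0.0  # playhead position within the current clip
    for start, end in zip(beat_times, beat_times[1:]):
        span = end - start
        # Switch to the next clip when this one can't cover a full beat.
        if pos + span > clip_lengths[clip]:
            clip = (clip + 1) % len(clip_lengths)
            pos = 0.0
        segments.append((clip, pos, pos + span))
        pos += span
    return segments
```

Concatenating the resulting segments in order yields a cut list where every scene change coincides with a beat, which is the essence of the “selfie to the beat” effect.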
If a picture is worth a thousand words, a video is worth even more, especially a video where the subject’s eyes and micro-expressions are visible at all times. This is only the beginning, but the future selfie camera is here!