The Animoji feature in Messages, of course!
Simply put, it uses face recognition technology and the animated characters provided by the system to let you record your own personalized dynamic expressions.
This amazing feature helps us express things we don't know how to put into words. Besides the animation itself, people can record audio to convey their mood, and they can save the recorded Animoji and share them on social media.
So how do these dynamic expressions, which you can't stop replaying, come into being?
This is not possible without face recognition technology. Instead of relying on two-dimensional images, the iPhone X uses structured light, which adds depth information that ordinary 2D imaging lacks. By projecting light onto the face and reading how that light deforms across the surface, the iPhone can determine the face's three-dimensional shape.
Unlike previous models, the iPhone X adds an infrared camera, a flood illuminator, an ambient light sensor, and a dot projector alongside the front camera, microphone, and proximity sensor.
When we look at the phone screen, it projects a grid of more than 30,000 invisible light dots onto our face. Because the face is uneven, the dot pattern deforms. By reading that pattern through the infrared camera and combining it with an algorithm trained on a large amount of facial data, the phone obtains a face with depth information, that is, a true three-dimensional model of the face.
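The depth-recovery step can be illustrated with a toy triangulation formula: the further a projected dot shifts from its expected position (its disparity), the closer the surface it landed on. The function name and the baseline and focal-length values below are illustrative assumptions, not Apple's actual parameters.

```python
def depth_from_disparity(disparity_px, baseline_mm=10.0, focal_px=1800.0):
    """Toy structured-light depth: a dot shifted by `disparity_px` pixels
    between the projector and the infrared camera implies a surface at
    depth = (baseline * focal length) / disparity."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive")
    return (baseline_mm * focal_px) / disparity_px

# Dots that shift more sit closer to the camera:
near = depth_from_disparity(120.0)  # larger shift, smaller depth
far = depth_from_disparity(60.0)    # smaller shift, larger depth
```

Reading the whole 30,000-dot grid this way yields a depth value per dot, which is the raw material for the 3D face model.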
Thanks to the linkage of these components, when we aim our face at Animoji's recognition area, the system generates a three-dimensional model of the face that changes as our expressions change. The various Animoji options are like "3D masks": when we make an expression, the corresponding mask changes its shape to produce the "I laugh and it laughs" effect.
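The "3D mask" behavior can be sketched as a blend-shape model: a neutral mesh plus weighted offsets toward expression targets such as a smile. Blend shapes are a standard facial-animation technique; the sketch below is a general illustration under that assumption, not Apple's actual implementation.

```python
def blend_shape(neutral, deltas, weights):
    """Offset each vertex coordinate of the neutral mesh by the
    weighted sum of per-expression deltas (one delta list per shape)."""
    return [
        v + sum(w * d[i] for w, d in zip(weights, deltas))
        for i, v in enumerate(neutral)
    ]

# One-dimensional vertex coordinates, for simplicity:
neutral = [0.0, 0.0, 0.0]
smile_delta = [0.0, 1.0, 0.0]  # hypothetical "smile" target offset
# A half-strength smile moves the middle vertex halfway to the target.
posed = blend_shape(neutral, [smile_delta], [0.5])
```

Tracking the user's face amounts to re-estimating the weights every frame, so the mask deforms in lockstep with the real expression.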
The Animoji experience may not be so smooth for those who over-act. Once the face leaves the recognition area, for example during a large head shake, the "3D mask" will freeze in confusion. A few more tries and you will get the hang of it.
If your movements are too large, you may simply move out of frame. Besides structured light, ordinary image-based face recognition can achieve similar effects.
For example, many people are familiar with FaceRig. All it needs is an ordinary camera, and you can see your own expressions mirrored on a different version of yourself.
Face recognition technology is widely used around us, and many AI applications in other areas have also entered our lives.
ByteBridge, a human-powered data training platform, provides high-quality services to collect and annotate different types of data, such as text, image, audio, and video, to accelerate the development of the machine learning industry.
- Complex tasks are automatically broken down into tiny components to minimize human error
- Real-time QA and QC are integrated into the labeling workflow, and a consensus mechanism is introduced to ensure accuracy
- Consensus: the same task is assigned to several workers, and the correct answer is the one returned by the majority
- All results are screened and inspected by both machines and human workers
- In this way, ByteBridge can ensure a data acceptance and accuracy rate of over 98%
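The consensus mechanism described above can be sketched as simple majority voting; the function below is an illustrative implementation, not ByteBridge's actual pipeline.

```python
from collections import Counter

def consensus_label(answers):
    """Return the label submitted by the majority of workers.
    On a tie, the label that appeared first wins."""
    return Counter(answers).most_common(1)[0][0]

# Three workers label the same image; the majority answer wins.
label = consensus_label(["cat", "cat", "dog"])
```

Assigning each task to an odd number of workers keeps ties rare, and disagreement rates can flag tasks that need a human reviewer.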
We can provide personalized annotation tools and services according to customer requirements.
For more information, please visit bytebridge.io, where clear pricing is available.
Please feel free to contact us: email@example.com
Related articles:
- What are the Most Impressive AI Products?
- Why the High-Quality Training Data is so Important to AI Machine Learning?
- Data Labeling and Annotation Outsourcing Service
- No Bias Training Data — the New Bottlenecks in Machine Learning
- Data Annotation and Labeling for ML Projects in 2021