AR technology is on the rise, and its near-term prospects arguably look more promising than VR's. The rise of the new social AR has caught widespread attention, and many are curious how it is achieved. This article walks through the implementation of social AR technology.
In 2014, a Ukrainian startup called Looksery used this technology to build a digital-makeup selfie app for consumers, which was downloaded more than a million times. Snapchat spotted a potentially huge market and acquired Looksery. Six months later, that acquisition led to Snapchat's now-famous "Lens" product. On the other side, Facebook, sensing a competitive threat, acquired the team behind the MSQRD app the following year. Selfie AR, a form of human-centered computer vision, suddenly caused a sensation around the world and became a key weapon in the battle between the two social media giants.
In 2017, Apple and Snapchat each launched their first SLAM-based products, ARKit and World Lenses (which let devices place digital objects on detected planes), while Facebook brought AR Studio to market (which lets developers create their own AR filters). However, reports suggest these efforts have not yet seen widespread adoption among users.
So what is next? For us, it is the rise of new social AR. Social AR can not only serve as a bridge between the selfie-AR stage and glasses-based AR; the technologies behind it are likely to become key components of the future. To get there, we need a neural network that detects and tracks people in real time in all configurations (not just selfies). This presents a series of challenges.
Selfie AR tracking with a front camera is essentially a special case of detecting and tracking a person. Moving from the front camera to the rear camera, we encounter other situations, such as:
Subjects are more likely to be off-center relative to the camera.
They can appear at varying distances and sizes.
They are often not facing the camera, so we cannot rely on faces alone; we must also look for heads, hair, hats, and related features.
Multiple people often appear in view.
These are the challenges our technology needs to overcome. So how does it work? We break it down into four parts:
1. Multi-head and multi-body detection. Our technology detects multiple heads and bodies in real time. Given the user's camera image, the application identifies the regions of the image containing each head and its corresponding body.
What does this enable? It allows us to estimate a person's distance based on the size of their head, and the body gives us something to anchor visual effects to as the person moves.
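The distance-from-head-size idea can be sketched with the pinhole camera model. The average head height and focal length below are illustrative assumptions, not values from the original system:

```python
# Estimating a subject's distance from the apparent size of their head,
# using the pinhole camera model: distance = focal_length * real_height / pixel_height.
# Both constants are assumed, illustrative values.

AVG_HEAD_HEIGHT_M = 0.23   # assumed average adult head height in meters
FOCAL_LENGTH_PX = 1000.0   # assumed camera focal length in pixels

def estimate_distance(head_box_height_px: float) -> float:
    """Return the estimated distance (meters) for a detected head
    whose bounding box is `head_box_height_px` pixels tall."""
    return FOCAL_LENGTH_PX * AVG_HEAD_HEIGHT_M / head_box_height_px
```

With these constants, a head 230 pixels tall would be estimated at about one meter away; a smaller apparent head maps to a proportionally larger distance.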
2. Continuous per-person tracking in the scene. To track a person's movements and features across a scene, we compare head and body information across multiple frames. This way we can pin visual effects to a specific person even when they are surrounded by others, and even when they leave the camera view and re-enter it.
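One common way to associate detections across frames is to match each existing track to the new detection it overlaps most, measured by intersection-over-union (IoU). This greedy matcher is a simplified sketch of the general technique, not the original system; real trackers add appearance features to survive occlusion and re-entry:

```python
# A minimal sketch of frame-to-frame identity association by bounding-box
# overlap (IoU). Boxes are (x1, y1, x2, y2) tuples.

def iou(a, b):
    """Intersection-over-union of two axis-aligned boxes."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter) if inter else 0.0

def match_tracks(tracks, detections, threshold=0.3):
    """Greedily assign each track (id -> last box) to the detection index
    it overlaps most, if that overlap exceeds `threshold`."""
    assignments = {}
    used = set()
    for track_id, box in tracks.items():
        best, best_iou = None, threshold
        for i, det in enumerate(detections):
            if i in used:
                continue
            score = iou(box, det)
            if score > best_iou:
                best, best_iou = i, score
        if best is not None:
            assignments[track_id] = best
            used.add(best)
    return assignments
```

For example, two tracks whose boxes shift slightly between frames are each re-assigned to the nearby detection, regardless of the order the detections arrive in.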
3. Background separation and full-body segmentation. For each tracked person, we further classify pixels as belonging to the face, skin, hair, clothing, or background. This cleanly separates the image into a series of distinct layers, which we can then use for advanced blending of AR effects. Without this, such effects could only be achieved with light-field or depth-sensing hardware, which is impractical for current smartphones.
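Once per-pixel masks exist, applying an effect to just one layer reduces to masked alpha blending. The sketch below uses plain Python lists of grayscale values in place of real image arrays; it illustrates the compositing idea, not the original pipeline:

```python
# Masked alpha blending: apply `effect` over `image` only where the
# segmentation `mask` is 1 (e.g. only on the "hair" layer).
# Images are lists of rows of grayscale pixel values; illustrative only.

def blend_layer(image, effect, mask, alpha=0.5):
    """Return a new image with `effect` alpha-blended in wherever
    the corresponding mask value is truthy; other pixels pass through."""
    return [
        [e * alpha + p * (1 - alpha) if m else p
         for p, e, m in zip(img_row, eff_row, mask_row)]
        for img_row, eff_row, mask_row in zip(image, effect, mask)
    ]
```

Because each layer (hair, skin, clothing, background) has its own mask, different effects can be composited independently and stacked without disturbing the other layers.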
4. Editor. We trained our neural network to produce layers that any designer can easily interact with and manipulate. Because the network is built on simple mathematical operations, it achieves the same quality on desktop and mobile devices. This allows designers to iterate quickly and design visuals for the Spilly app using our custom editor.
That covers how the technology works. Now let's look at some of its use cases:
We've developed three social AR apps that encourage people to get together, impersonate their favorite stars, play around with each other, and more.
Gaming experience: People can now become characters in a game that can be positioned and animated.
Fashion app: Applying clothes and filters to users is not only fun; users can also virtually "try on" items and buy them.
Put yourself in third-party content: With our full-image segmentation technology, users can place their own head on a person in any video, completing a "head transplant." Wouldn't you like to be the protagonist of your favorite movie?
There are many potential use cases. In the era of glasses-based AR, people themselves could become interactive, contextual triggers: indoor scenarios (such as person-specific reminders, for example reminding a husband to make breakfast), personal details, or richer gaming experiences. Later, we may see outdoor interactions involving commercial transactions (such as person-to-person payments), as well as visual enhancements; expect the same motivation behind photo-sharing apps like Tumblr and Pinterest to extend to the person standing in front of you.
In short, people are at the center of this technology-driven world, and this human-centered advance in visual technology will only connect people and technology more closely. We are moving toward a future where smartphone-based business, entertainment, and self-expression are freed from the confines of the screen, and the digital world is integrated directly into the world before us. We should be cautious, but the value of this potential future is plain to see.