AugmentedReality.by is a leading provider of innovative augmented reality (AR) solutions, specializing in creating immersive experiences for businesses and individuals. Our expertise includes:
Snapchat Lenses & WebAR – Engaging, interactive filters and effects for Snapchat, along with browser-based AR experiences that don’t require app downloads.
TikTok Effects – Custom AR effects designed to enhance user engagement and boost visibility on TikTok.
iOS AR Apps – Development of high-performance AR applications for iOS, utilizing the latest ARKit technology.
WebAR with 8th Wall – Cutting-edge WebAR experiences powered by 8th Wall, accessible across devices directly through web browsers.
Meta Quest AR – Immersive augmented reality solutions for Meta Quest, pushing the boundaries of spatial computing.
By blending the digital and physical worlds, we help brands engage their audiences in unique ways, enhance learning experiences, and streamline operational processes. Whether you’re looking to boost customer interaction with interactive content or implement AR technology in your workflow, AugmentedReality.by delivers customized solutions that bring your vision to life.
Snapchat has unveiled its first-ever generative AI Video Lenses, powered by an in-house-developed AI model, the company revealed in an exclusive announcement to TechCrunch. The three new AI-powered Lenses are available exclusively to subscribers of Snapchat Platinum, the top tier of Snapchat+, which costs $15.99 per month.
This launch follows Snap’s introduction of an AI video-generation tool at its Partner Summit in September 2024. A company spokesperson confirmed that the new AI Video Lenses use upgraded versions of that underlying technology.
Snapchat’s Push into AI and AR
While Snap has long been a leader in augmented reality (AR), it has increasingly invested in AI-driven features to compete with rivals like Instagram and TikTok. The new AI Video Lenses aim to offer unique, interactive experiences not yet available on other platforms.
Initially, Snapchat is rolling out three AI Video Lenses, with plans to add more each week.
How to Use the AI Video Lenses
Users can access the new Lenses through the Lens carousel, select their preferred effect, and capture a video via the front or rear camera. The AI-generated video then automatically saves to Memories.
“These Lenses, powered by our in-house generative video model, bring cutting-edge AI tools to Snapchatters in a familiar format,” the company stated in a blog post. “We’ve always been early adopters in AR and AI, and we can’t wait to see what our community creates.”
Snap’s Shift to In-House AI Models
While Snap has previously relied on third-party AI tools (such as those from OpenAI and Google), it is now developing its own AI models to enhance performance and reduce costs. Last month, the company introduced a mobile-optimized text-to-image AI model, which will soon power additional Snapchat features.
By leveraging in-house AI, Snap aims to deliver high-quality, cost-effective AI experiences to its users while staying ahead in the social media innovation race.
Would you try these new AI Video Lenses? Let us know in the comments!
Since its launch in 2020, Project Aria has enabled researchers across the world to advance the state of the art in machine perception and AI, through access to cutting-edge research hardware and open-source datasets, models, and tooling. Today, we’re excited to announce the next step in this journey: the introduction of Aria Gen 2 glasses. This next generation of hardware will unlock new possibilities across a wide range of research areas including machine perception, egocentric and contextual AI, and robotics.
For researchers looking to explore how AI systems can better understand the world from a human perspective, Aria Gen 2 glasses add a new set of capabilities to the Aria platform. They include a number of advances not found on any other device available today, and access to these breakthrough technologies will enable researchers to push the boundaries of what’s possible.
Compared to Aria Gen 1, Aria Gen 2’s unique value proposition includes advanced sensors and upgraded on-device machine perception capabilities, including SLAM, eye tracking, hand tracking, and audio interaction via onboard speakers.
Our decade-long journey to create the next computing platform has led to the development of these critical technologies. At Meta, teams at Reality Labs Research and the FAIR AI lab will use them to advance our long-term research vision. Making them available to academic and commercial research labs through Project Aria will further advance open research and public understanding of a key set of technologies that we believe will help shape the future of computing and AI.
The open research enabled by Project Aria since 2020 has already led to important work, including the creation of open-source tools in wide use across academia and industry. The Ego-Exo4D dataset, collected using the first generation of Aria glasses, has become a foundational tool across modern computer vision and the growing field of robotics. Researchers at Georgia Tech recently showed how the Aria Research Kit can help humanoid robots learn to assist people in the home, while teams at BMW used it to explore how to integrate augmented and virtual reality systems into smart vehicles.
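As a concrete starting point, the sketch below shows how a recording made with Aria glasses can be opened using the open-source projectaria_tools Python package. The file name is a placeholder, and the RGB stream ID follows the Gen 1 conventions in the library’s documentation, so details may differ for Gen 2 recordings.

```python
# Minimal sketch: reading frames from an Aria recording (.vrs) with the
# open-source projectaria_tools package (pip install projectaria-tools).
# "recording.vrs" is a placeholder path; "214-1" is the RGB camera
# stream ID used by Aria Gen 1 in the library's documentation.
from projectaria_tools.core import data_provider
from projectaria_tools.core.stream_id import StreamId

provider = data_provider.create_vrs_data_provider("recording.vrs")
rgb_stream = StreamId("214-1")

# Print capture timestamps and image shapes for the first few RGB frames.
for i in range(min(5, provider.get_num_data(rgb_stream))):
    image_data, record = provider.get_image_data_by_index(rgb_stream, i)
    print(f"frame {i}: t={record.capture_timestamp_ns} ns, "
          f"shape={image_data.to_numpy_array().shape}")
```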
Aria is also enabling the development of new technologies for accessibility. The first-generation Aria glasses were used by Carnegie Mellon University in its NavCog project, which aimed to build technologies that assist blind and low-vision individuals with indoor navigation. Building on this foundation, the Aria Gen 2 glasses are now being used by Envision, a company dedicated to creating solutions for people who are blind or have low vision. Envision is exploring the integration of its Ally AI assistant and spatial audio with the latest Aria Gen 2 glasses to enhance indoor navigation and accessibility experiences.
Envision used the on-device SLAM capabilities of Aria Gen 2, along with spatial audio delivered through the onboard speakers, to help blind and low-vision individuals navigate indoor environments seamlessly. This use of the technology, still in the exploratory research phase, exemplifies how researchers can leverage Aria Gen 2 glasses to prototype AI experiences based on egocentric observations. The advanced sensors and on-device machine perception capabilities, including SLAM, eye tracking, hand tracking, and audio interactions, also make the glasses well suited to data collection for research and robotics applications.
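To make that interaction loop concrete, here is a simplified, hypothetical sketch of the core logic such a navigation aid might run: take the wearer’s latest SLAM pose, compute the bearing to the next waypoint, and map it to a coarse left/ahead/right audio cue. The function names and waypoint logic are illustrative assumptions, not Envision’s or Meta’s actual implementation.

```python
import math

def bearing_to_waypoint(pose_xy, yaw_rad, waypoint_xy):
    """Signed angle (radians) from the wearer's heading to the waypoint,
    positive to the wearer's left. Inputs are illustrative: a real system
    would read the pose from the glasses' on-device SLAM."""
    dx = waypoint_xy[0] - pose_xy[0]
    dy = waypoint_xy[1] - pose_xy[1]
    target = math.atan2(dy, dx)
    # Wrap the difference into [-pi, pi].
    return (target - yaw_rad + math.pi) % (2 * math.pi) - math.pi

def audio_cue(bearing_rad, tolerance_rad=math.radians(15)):
    """Map a bearing to a coarse spatial-audio cue for the onboard speakers."""
    if abs(bearing_rad) <= tolerance_rad:
        return "ahead"
    return "left" if bearing_rad > 0 else "right"

# Example: wearer at the origin facing +x, waypoint ahead and to the left.
bearing = bearing_to_waypoint((0.0, 0.0), 0.0, (2.0, 1.5))
print(audio_cue(bearing))  # -> "left"
```

In a real deployment the cue would be rendered as spatialized sound rather than a string, but the pose-to-cue mapping above captures the basic idea.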
Over the coming months, we’ll share more details about the timing of device availability to partners. Researchers interested in accessing Aria Gen 2 can sign up to receive updates. We’re excited to see how researchers will leverage Aria Gen 2 to pave the way for future innovations that will shape the next computing platform.
In 2005, New York City’s Central Park was transformed into a vibrant saffron river as the 7,503 gates of the installation “The Gates” finally came to life after decades of planning. The project by Christo and Jeanne-Claude became a symbol of the city’s revival after the 9/11 tragedy, attracting over four million visitors, three times the usual number.
Now, 20 years later, the legendary installation returns—in digital form. To celebrate the anniversary, the Christo and Jeanne-Claude Foundation, The Shed, Central Park, NYC Parks, and Bloomberg Philanthropies have organized a large-scale retrospective.
The Shed hosts an exhibition featuring original sketches, models, and archival materials, while Central Park itself has become a site for an AR experience. Using the Bloomberg Connects app, visitors can once again walk beneath hundreds of saffron-colored arches, virtually recreated along the path between the park’s east and west sides at 72nd Street.
The route is fully accessible to visitors with limited mobility, and augmented reality technology brings “The Gates” back to life in a whole new dimension.