Mark Zuckerberg, chief executive officer of Meta Platforms Inc., during the Acquired LIVE event at the Chase Center in San Francisco, California, US, on Tuesday, Sept. 10, 2024.
David Paul Morris | Bloomberg | Getty Images
Meta announced the Quest 3S, the latest virtual reality headset to come out of the company’s Reality Labs division and a cheaper offering than its predecessor.
The device will go on sale on Oct. 15 and will retail starting at $299, down from the $499 starting price for 2023’s Quest 3. The device can be used to watch movies, as well as to run VR fitness apps and games, Meta said Wednesday at its Connect event at its headquarters in Menlo Park, California. The company positioned the headset as a multitasking computer, putting it in competition with Apple’s $3,499 Vision Pro headset that launched in February.
Meta’s previous Quest devices are the bestselling VR headsets, with millions shipped thanks to heavy marketing and a lower price than many competitors, but those efforts have yet to spark a cultural phenomenon or a mainstream software ecosystem around VR. Including its acquisition of Oculus in 2014, Meta has poured more than $65 billion in expenses into its hardware efforts.
“I’ve been waiting for this one for a long time,” Zuckerberg said.
Meta CEO Mark Zuckerberg has defended the company’s spending as a strategic initiative to prevent Apple from controlling future hardware platforms.
Although there was hope among VR developers that Apple’s entry into the market would spur a wave of new apps and users, Apple hasn’t revealed sales figures for its headset, and reports indicate it has sold in small volumes, under 1 million units, partly because of its high price.
What it does
A Meta representative said the “S” stands for “start” — as in getting started with VR.
Many of the new Meta features that the company discussed on Wednesday for its $299 Quest 3S have counterparts on Apple’s Vision Pro, including a mode that allows the device to be used on an airplane and another that simulates a large movie theater inside the headset.
Meta highlighted improved “passthrough,” the term used to describe when a VR headset uses cameras and sensors on the outside of the device to display live video of the user’s surroundings inside the headset. That function is intended to make users feel like they are looking through a display and allows them to interact with the real world while keeping the headset on. For the Quest 3S, Meta added a dedicated button to turn on passthrough.
The company has emphasized the ability of the Quest 3S to multitask and run apps, positioning it as a computing device, instead of a game console.
“All the things you can do with a general purpose computer, Quest is the full package,” Zuckerberg said.
In demos provided Tuesday, Meta showcased the device running as many as four apps at one time on floating screens inside the headset, including a YouTube video, a browser, Amazon Music and Meta’s app store. Meta says the headset can handle six windows. But the demo experience was not smooth. The Amazon Music app crashed, window controls would disappear and Meta’s controllers would fall asleep after a few minutes if the user wasn’t pressing buttons.
Besides the Quest 3S, Meta also announced a price cut for last year’s Quest 3, bringing the price of the 512GB version down from $650 to $500. The Quest 3 has more advanced lenses and a superior screen with a higher resolution than the Quest 3S.
Additionally, Meta said it will discontinue the Quest Pro, its $999 headset launched in 2022 that never gained much momentum.
Eventually, glasses
Meta Orion AR glasses prototype
Meta
Zuckerberg’s justification for spending so much on VR and augmented reality is his belief that the technology will eventually end up in lightweight, transparent glasses that overlay computer graphics and information onto the real world.
Investing in VR software and hardware is an early step toward those glasses, which could take as much as a decade to develop, Zuckerberg has previously said.
Zuckerberg showed off an early concept of what those glasses could look like on Wednesday. The thick, black-framed prototype, called Orion, won’t be sold to consumers, but Meta says it will be used internally as the company continues working toward the consumer glasses it hopes to one day sell.
“This is where we are going,” Zuckerberg said.
Meta hopes the next version of Orion will be available to consumers as the company’s first full AR glasses, Zuckerberg said without giving a timetable for when that may be.
Zuckerberg said the device revealed on Wednesday was Meta’s first “fully-functioning” prototype of the glasses, and would be physically tethered to a small “puck.” Zuckerberg also said that it would take advantage of a wrist-based interface, which stems from the company’s 2019 acquisition of CTRL-Labs.
Zuckerberg said that Orion enables users to play games, multitask with multiple windows, and videoconference with people around the world represented by realistic avatars.
Meta’s Orion prototype comes a week after Snap announced its fifth-generation Spectacles AR glasses. Those thick-framed glasses will only be made available to developers, who must commit to paying $99 a month for one full year if they want to build AR apps for the device.
This isn’t the first time Meta has publicly revealed a prototype of a future device or research project to signal to investors and employees where VR and AR technology is headed. The Orion glasses are an improvement on Project Nazare, prototype smart glasses that Zuckerberg announced in 2021, when the company changed its name from Facebook.
Ray-Ban Meta smart glasses are powered by a Qualcomm chip. Qualcomm, Samsung and Google are working on smart glasses, according to Qualcomm CEO Cristiano Amon.
Nurphoto | Getty Images
Meta does sell a pair of glasses with a built-in camera in partnership with EssilorLuxottica called Ray-Ban Meta, which start at $299. While these glasses don’t have any displays, they do have tiny speakers that allow the device to play music or interact with Meta AI, the company’s voice assistant.
As part of Wednesday’s event, Meta announced new Meta AI features for its Ray-Ban smart glasses.
For example, the Ray-Ban Meta glasses will be able to detect when a user is looking at a sign in Spanish and, if asked, can translate it aloud in the user’s ear, Meta said. The camera can scan QR codes, and it can also extract information like book titles out of photos it takes.
Another new capability for the glasses is the ability to remember facts like where the user parked.
Li-Chen Miller, the vice president of product in charge of Ray-Ban Meta glasses, told CNBC that when she travels, she uses the glasses to take photos of her hotel room door, and later, she asks Meta AI to recall the number.
Zuckerberg is excited about the Ray-Ban Meta smart glasses, which have sold more than 730,000 units in their first three quarters, according to market researcher IDC. In July, he told investors that they were “a bigger hit sooner than expected.”
Last week, EssilorLuxottica and Meta announced that they had extended their partnership to develop more smart glasses.
AI that speaks
Zuckerberg also introduced improvements to the company’s Meta AI chatbot that will allow people to interact with it using their voice instead of written text.
Users will now be able to have natural voice conversations with Meta AI, which is accessed through Meta apps like Messenger and Instagram, and to perform actions by voice, such as telling Meta AI to take a photo by speaking to their smartphone.
For Meta AI’s new feature, the company is using computer-generated voices from celebrities including Awkwafina, Judi Dench, John Cena, Keegan-Michael Key and Kristen Bell.
The new Siri-like Meta AI voice feature will be available over the next month for U.S., Canadian, Australian and New Zealand users of WhatsApp, Instagram, Facebook and Messenger.
The feature comes one day after rival OpenAI, the maker of ChatGPT, announced an advanced voice feature for people who pay for its premium service.
The company said that the new chatbot features are based on Meta’s AI model, Llama. The company also announced a newer version of Llama, called Llama 3.2. The updated model can understand both images and text, an upgrade from its predecessors, which responded only to written prompts.