Facebook is starting to feed its Meta AI with private, unpublished photos

For years, Meta trained its AI programs using the billions of public images uploaded by users onto Facebook and Instagram’s servers. Now, it’s also hoping to access the billions of images that users haven’t uploaded to those servers. Meta tells The Verge that it’s not currently training its AI models on those photos, but it would not answer our questions about whether it might do so in the future, or what rights it will hold over your camera roll images.

On Friday, TechCrunch reported that Facebook users trying to post something on the Story feature have encountered pop-up messages asking if they’d like to opt into “cloud processing”, which would allow Facebook to “select media from your camera roll and upload it to our cloud on a regular basis”, to generate “ideas like collages, recaps, AI restyling or themes like birthdays or graduations.”

By allowing this feature, the message continues, users are agreeing to Meta’s AI terms, which allow its AI to analyze the “media and facial features” of those unpublished photos, as well as the dates the photos were taken and the presence of other people or objects in them. Users further grant Meta the right to “retain and use” that personal information.

Meta recently acknowledged that it scraped the data from all the content that’s been published on Facebook and Instagram since 2007 to train its generative AI models. Though the company stated that it’s only used public posts uploaded from adult users over the age of 18, it has long been vague about exactly what “public” entails, as well as what counted as an “adult user” in 2007.

Meta tells The Verge that, for now, it’s not training on your unpublished photos with this new feature. “[The Verge’s headline] implies we are currently training our AI models with these photos, which we aren’t. This test doesn’t use people’s photos to improve or train our AI models,” Meta public affairs manager Ryan Daniels tells The Verge.

Meta’s public stance is that the feature is “very early,” innocuous and entirely opt-in: “We’re exploring ways to make content sharing easier for people on Facebook by testing suggestions of ready-to-share and curated content from a person’s camera roll. These suggestions are opt-in only and only shown to you – unless you decide to share them – and can be turned off at any time. Camera roll media may be used to improve these suggestions, but are not used to improve AI models in this test,” reads a statement from Meta comms manager Maria Cubeta.

On its face, that might sound not altogether different from Google Photos, which similarly might suggest AI tweaks to your images after you opt into Google Gemini. But unlike Google, which explicitly states that it does not train generative AI models with personal data gleaned from Google Photos, Meta’s current AI usage terms, which have been in place since June 23, 2024, do not provide any clarity as to whether unpublished photos accessed through “cloud processing” are exempt from being used as training data — and Meta would not clear that up for us going forward.

And while Daniels and Cubeta tell The Verge that opting in only gives Meta permission to retrieve 30 days’ worth of your unpublished camera roll at a time, it appears that Meta is retaining some data for longer than that. “Camera roll suggestions based on themes, such as pets, weddings and graduations, may include media that is older than 30 days,” Meta writes.

Thankfully, Facebook users do have an option to turn off camera roll cloud processing in their settings; once the feature is disabled, Facebook will also start removing unpublished photos from the cloud after 30 days.

The feature suggests a new incursion into our previously private data, one that bypasses the point of friction known as conscientiously deciding to post a photo for public consumption. And according to Reddit posts found by TechCrunch, Meta is already offering AI restyling suggestions on previously uploaded photos, even when users weren’t aware of the feature: one user reported that Facebook had Studio Ghiblified her wedding photos without her knowledge.

Correction, June 27th: An earlier version of this story implied Meta was already training AI on these photos, but Meta now states that the current test does not yet do so. Also added statement and additional details from Meta.

Anthropic Has Added Several More Religions on Its Quest to Inject Perfect Morals into Claude


The original mysterious black box wasn’t an AI model at all, but the Kaaba, the black cube at the center of the Sacred Mosque of Mecca. Prior to Muhammad’s conquest of Mecca, the Kaaba was a sort of all-purpose repository of 360 sacred symbols from around the region. If you were, say, a busy merchant on his way to Medina, whatever the great spiritual truths of the universe may be, they were in there somewhere, so a prayer to the Kaaba had you covered in the god department and you were good to go.

Anthropic seems to be doing something along these lines with Claude.

Last week, representatives from Anthropic—along with OpenAI—attended an event in New York called the “Faith-AI Covenant” roundtable. The New York Board of Rabbis, the Hindu Temple Society of North America, the Church of Jesus Christ of Latter-day Saints, the U.S.-based Sikh Coalition, and the Greek Orthodox Archdiocese of America were all in attendance.

Last month, I wrote about a series of meetings and dinners Anthropic organized with a collection of 15 Christian leaders. Anthropic was looking for advice from the Christians, and guidance on the supposed “spiritual development” of its Claude AI model. At the time Anthropic said it was working on arranging meetings with moral thinkers who represented other groups.

It’s not clear from a fresh Associated Press piece about the Faith-AI Covenant meeting whether these latest conversations with religious leaders and the earlier meetings with Christians were part of a single coherent program at Anthropic, and whether the staff members who participated in the Christian summit participated in this one as well. Gizmodo asked Anthropic for clarity about this on Saturday, but Anthropic did not return our request as of this writing.

The Associated Press also says OpenAI and Anthropic “initiated outreach,” but also that a Swiss NGO called the Interfaith Alliance for Safer Communities organized it, and has plans for future events along similar lines in China, Kenya and the United Arab Emirates. Also mentioned as a “key partner” was Baroness Joanna Shields, a member of the British House of Lords.

There’s not a single clear takeaway in the AP story—no religious instructions laid out by all these spiritual leaders. But what Anthropic calls Claude’s constitution includes a dissection of the philosophically fraught moral work Anthropic is at least trying to do by injecting morals into a machine: getting it to make the decision of a person with perfect values when there’s no way to write a rule for a situation that arises, and the consequences of making the wrong decision could be dire. This, Anthropic writes, is “centrally because we worry that our efforts to give Claude good enough ethical values will fail.”

To this end, the Associated Press story extracts some quietly devastating commentary from Rumman Chowdhury, CEO of a nonprofit called Humane Intelligence: “I think a very naive take that Silicon Valley has had for a couple of years related to generative AI was that we could arrive at some sort of universal principles of ethics,” Chowdhury told the AP, adding, “They have very quickly realized that that’s just not true. That’s not real. So now they’re looking at maybe religion as a way of dealing with the ambiguity of ethically gray situations.”

They are indeed looking at maybe religion. But it’s hard to picture Anthropic coming away from these meetings converted, and inserting one set of specific religious doctrines into Claude. They’re just trying to glean high-order ethical truths, and demonstrating to the world that they’ve—ostensibly—left no stone unturned in searching for them.

Your mileage will vary on whether you think a machine charged with making decisions or giving important advice would, when the chips are down, be able to synthesize ideal morals thanks to meetings its creators held with administrators from some of humanity’s premier religions. It probably can’t hurt, sorta like nodding at the pre-Islamic Kaaba. But then again, only God knows for sure.

I Tried the Best Captioning Smart Glasses, and Only One Leads the Pack

Unlike the other glasses I tested, Even doesn’t sell a subscription plan; everything’s included out of the box.

The only downside I could find with the G2 is that it is largely devoid of offline features, so the glasses have to be connected to the internet to do much of anything. Considering the G2’s capabilities, it’s a trade-off I am more than happy to make.

Other Captioning Glasses I Tested

There are plenty of captioning eyeglasses on the market, and they are surprisingly similar in both looks and features. While many are quite capable, none offered the combination of power and affordability that I got with Even’s G2. Here’s a rundown of everything else I tested.

Leion’s Hey 2 is the price leader in this market, and even its prescription lenses ($90 to $299) are pretty affordable. The hardware, however, is heavy: 50 grams without lenses, 60 grams with them. A full charge gets you six to eight hours of operation; the case adds juice for up to 12 recharges.

I like the Leion interface, which lays out caption, translation, “free talk” (two-way translation), and a teleprompter feature on its clean app. You get access to nine languages; using Pro minutes expands that to 143. Leion sells its premium plan by the minute, not the month, so you need to remember to toggle this mode off when you don’t need it. Pricing is $10 for 120 minutes, $50 for 1,200 minutes, and $200 for 6,000 minutes. There’s no offline use supported, and I often struggled to get AI summaries to show up in English instead of Chinese (regardless of the recorded language).


You’re not seeing double: XRAI and Leion use the same manufacturer for their hardware, and the glasses weigh the same. The battery spec is also similar, with up to eight hours on the frames and another 96 hours when recharging with the case. XRAI claims its display is significantly brighter than competitors’, but I didn’t see much of a difference in day-to-day use.

The features and user experience are roughly the same, though Leion’s teleprompter feature isn’t implemented in XRAI’s app, and it doesn’t offer AI summaries of conversations. I also didn’t find XRAI’s app as user-friendly as Leion’s version, particularly when trying to switch among the admittedly exhaustive 300 language options. Only 20 of these are included without ponying up for a Pro subscription, which is sold both by the month and minute: $20/month gets you a max of 600 upgraded transcription minutes and 300 translation minutes; $40/month gets you 1,800 and 1,200 minutes, respectively. On the plus side, XRAI does have a rudimentary offline mode that works better than most. For prescription lenses, add $140 to $170.


AirCaps Smart Glasses

AirCaps does not make its own prescription lenses. Instead, you must purchase a pair of $39 “lens holders” and take them to an optician if you want prescription inserts. I was unable to test these with prescription lenses and ultimately had to try them out over my regular glasses, which worked well enough for short-term testing. Frames weigh a hefty 53 grams without add-on lenses; the company couldn’t tell me how much extra weight prescription lenses would add to that, but it’s safe to say these are the bulkiest and heaviest captioning glasses on the market. Despite the weight, they only carry two to four hours of battery life, with 10 or so recharges packed into the comically large case. Another option is to clip one of AirCaps’ rechargeable 13-gram Power Capsules ($79 for two) to one of the arms, which can provide 12 to 18 extra hours of juice.

The AirCaps feature list and interface make it perhaps the simplest of all these devices, with just a single button to start and stop recording. Transcriptions and translations are available for free in nine languages. For $20/month, you can add the Pro package, which offers better accuracy, access to more than 60 languages, and the option to generate AI summaries on demand (though only if recordings are long enough). As a bonus: Five hours of Pro features are free each month. Offline mode works pretty well, too. The only bad news is that these bulky frames just aren’t comfortable enough for long-term wear.


Captify’s glasses, the most expensive option on the market (up to $1,399 with prescription lenses!), weigh a relatively svelte 40 grams (52 grams with lenses) and offer about four hours of battery life. There’s no charging case; the glasses must be charged directly using the included USB-connected dongle.

The glasses are extremely simple, offering transcription and translation features—with support for about 80 languages, which is impressive. I unfortunately found the prescription lenses Captify sent to be the blurriest of the bunch, making the captions comparatively hard to read. And while the device supports offline transcription, performance suffered badly when disconnected from the internet. I couldn’t get translations to work at all when offline. For $15/month, you get better accuracy and speaker differentiation, and access to AI summaries of conversations. Prescription lenses cost between $99 and $600.

I Tried the Best Captioning Smart Glasses, and Only One Leads the Pack

Unlike the other glasses I tested, Even doesn’t sell a subscription plan; everything’s included out of the box.

The only downside I could find with the G2 is that it is largely devoid of offline features, so the glasses have to be connected to the internet to do much of anything. Considering the G2’s capabilities, it’s a trade-off I am more than happy to make.

Other Captioning Glasses I Tested

There are plenty of captioning eyeglasses on the market, and they are surprisingly similar in both looks and features. While many are quite capable, none matched the combination of power and affordability I got with Even’s G2. Here’s a rundown of everything else I tested.

Photograph: Christopher Null

Leion’s Hey 2 is the price leader in this market, and even its prescription lenses ($90 to $299) are pretty affordable. The hardware, however, is heavy: 50 grams without lenses, 60 grams with them. A full charge gets you six to eight hours of operation; the case adds juice for up to 12 recharges.

I like the Leion interface, which lays out caption, translation, “free talk” (two-way translation), and a teleprompter feature on its clean app. You get access to nine languages; using Pro minutes expands that to 143. Leion sells its premium plan by the minute, not the month, so you need to remember to toggle this mode off when you don’t need it. Pricing is $10 for 120 minutes, $50 for 1,200 minutes, and $200 for 6,000 minutes. There’s no offline use supported, and I often struggled to get AI summaries to show up in English instead of Chinese (regardless of the recorded language).
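Because Leion sells Pro time by the minute, the flat tier prices obscure how much cheaper the bulk tiers are per minute. A quick illustrative sketch (the prices and minute counts are the ones quoted above; the script itself is just for comparison):

```python
# Effective per-minute cost of Leion's Pro tiers, using the
# prices and minute counts quoted in the review above.
tiers = [(10, 120), (50, 1200), (200, 6000)]  # (price in USD, Pro minutes)

for price, minutes in tiers:
    # Divide the tier price by its included minutes to get $/min.
    print(f"${price} buys {minutes} min at ${price / minutes:.3f}/min")
```

That works out to roughly 8.3, 4.2, and 3.3 cents per minute, so the largest tier is about 2.5 times cheaper per minute than the smallest.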


You’re not seeing double: XRAI and Leion use the same manufacturer for their hardware, and the glasses weigh the same. The battery spec is also similar, with up to eight hours on the frames and another 96 hours when recharging with the case. XRAI claims its display is significantly brighter than competitors’, but I didn’t see much of a difference in day-to-day use.

The features and user experience are roughly the same, though Leion’s teleprompter feature isn’t implemented in XRAI’s app, and it doesn’t offer AI summaries of conversations. I also didn’t find XRAI’s app as user-friendly as Leion’s version, particularly when trying to switch among the admittedly exhaustive 300 language options. Only 20 of these are included without ponying up for a Pro subscription, which is sold both by the month and minute: $20/month gets you a max of 600 upgraded transcription minutes and 300 translation minutes; $40/month gets you 1,800 and 1,200 minutes, respectively. On the plus side, XRAI does have a rudimentary offline mode that works better than most. For prescription lenses, add $140 to $170.
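To compare XRAI’s two monthly plans on value, divide each price by its included quotas (the figures are the ones quoted above; this helper is only an illustration):

```python
# Per-minute value of XRAI's monthly plans, using the quoted quotas.
plans = {
    20: (600, 300),    # $/month: (transcription minutes, translation minutes)
    40: (1800, 1200),
}

for price, (transcribe, translate) in plans.items():
    # Price divided by each quota gives the effective $/min for that feature.
    print(f"${price}/mo: transcription ${price / transcribe:.3f}/min, "
          f"translation ${price / translate:.3f}/min")
```

If you max out both quotas, the $40 plan comes to about a third less per transcription minute (2.2 vs. 3.3 cents) than the $20 plan.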


AirCaps


AirCaps does not make its own prescription lenses. Instead, you must purchase a pair of $39 “lens holders” and take them to an optician if you want prescription inserts. I was unable to test these with prescription lenses and ultimately had to try them out over my regular glasses, which worked well enough for short-term testing. The frames weigh a hefty 53 grams without add-on lenses; the company couldn’t tell me how much extra weight prescription lenses would add, but it’s safe to say these are the bulkiest and heaviest captioning glasses on the market. Despite the weight, they manage only two to four hours of battery life, with 10 or so recharges packed into the comically large case. Another option is to clip one of AirCaps’ rechargeable 13-gram Power Capsules ($79 for two) to one of the arms, which can provide 12 to 18 extra hours of juice.

The AirCaps feature list and interface make it perhaps the simplest of all these devices, with just a single button to start and stop recording. Transcriptions and translations are available for free in nine languages. For $20/month, you can add the Pro package, which offers better accuracy, access to more than 60 languages, and the option to generate AI summaries on demand (though only if recordings are long enough). As a bonus: Five hours of Pro features are free each month. Offline mode works pretty well, too. The only bad news is that these bulky frames just aren’t comfortable enough for long-term wear.


The most expensive option on the market (up to $1,399 with prescription lenses!) weighs a relatively svelte 40 grams (52 grams with lenses) and offers about four hours of battery life. There’s no charging case; the glasses must be charged directly using the included USB-connected dongle.

The glasses are extremely simple, offering transcription and translation features, with support for about 80 languages, which is impressive. Unfortunately, I found the prescription lenses Captify sent to be the blurriest of the bunch, making the captions comparatively hard to read. And while the device supports offline transcription, performance suffered badly when disconnected from the internet; I couldn’t get translations to work at all offline. For $15/month, you get better accuracy, speaker differentiation, and access to AI summaries of conversations. Prescription lenses cost between $99 and $600.

