New Study Finds Smartwatches Aren’t That Good at Measuring Stress

Some health enthusiasts swear by smartwatches as a way to monitor stress levels, but a recent study calls into question that common usage. The study, published in the Journal of Psychopathology and Clinical Science, claims that such watches display a very limited ability to actually communicate what a person’s psychological state is. Sometimes, a watch may think the user is stressed when they’re really just excited about something, researchers say.

The report looked at nearly 800 students who wore a Garmin Vivosmart 4 smartwatch and measured their self-reported emotional states against the metrics collected by the wearables. According to the study, the self-reports of the watch-wearers and the analyses provided by the watches bore little resemblance to one another. It notes:

We investigated the concurrent overlap between self-report and wearable sensor data measuring stress, tiredness, and sleep. For the majority of individuals in our sample, we found that self-report and physiological measures of stress show very weak to no associations. These results raise several questions about differences between data sources and potential measurement issues.

Garmin advertises a stress-tracking capability for its smartwatches on its website: “Stress levels (0–100) are estimated by the Firstbeat Analytics engine, primarily using a combination of HR and HRV data. This data is recorded by the optical heart rate sensor on the back of your device.”
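Garmin doesn’t publish the Firstbeat algorithm, but the general recipe it describes — higher heart rate plus lower heart-rate variability reads as more stress — can be sketched with a toy example. Everything below, from the RMSSD calculation window to the 0–100 normalization thresholds and weights, is an invented illustration, not the real engine:

```python
# Illustrative only: a toy 0-100 "stress index" built from heart-rate data.
# This is NOT Firstbeat's proprietary algorithm (which isn't public); the
# thresholds and weights below are invented for demonstration.
import statistics

def toy_stress_index(rr_intervals_ms):
    """Map beat-to-beat (RR) intervals to a 0-100 score: higher heart rate
    (shorter RR intervals) and lower variability (smaller RMSSD) both push
    the score up."""
    mean_rr = statistics.mean(rr_intervals_ms)
    hr = 60_000 / mean_rr  # beats per minute

    # RMSSD: a standard time-domain HRV measure.
    diffs = [b - a for a, b in zip(rr_intervals_ms, rr_intervals_ms[1:])]
    rmssd = (sum(d * d for d in diffs) / len(diffs)) ** 0.5

    # Made-up normalization ranges: HR 50-120 bpm, RMSSD 10-80 ms.
    hr_load = min(max((hr - 50) / 70, 0.0), 1.0)
    hrv_load = 1.0 - min(max((rmssd - 10) / 70, 0.0), 1.0)
    return round(100 * (0.5 * hr_load + 0.5 * hrv_load))

calm = [1000, 930, 1010, 940, 1020, 950]   # ~62 bpm, high variability
tense = [600, 605, 598, 602, 600, 603]     # ~100 bpm, low variability
print(toy_stress_index(calm), toy_stress_index(tense))  # prints: 12 86
```

Note that a sensor like this sees only physiology, which is exactly the study’s point: a pounding heart scores high whether the cause is dread or delight.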

However, Garmin seems to admit that the quality and character of stress can be difficult to measure: “Public speaking and running up a flight of stairs can both send your heart racing, but the underlying reasons why are fundamentally different,” its website notes. The company suggests that wearing the watch more frequently can result in better measurements. “You can improve the quality of the insight gained by wearing your device as much as possible, especially while you sleep, because that is when your stress levels will typically be lowest,” the site states. “This helps create a better understanding of the full range of stress and relaxation states that you experience.”

In an interview with The Guardian, one of the study’s authors, Eiko Fried, said that the correlation between the self-reported stress scores that were collected as part of the study and the readings provided by the smartwatches was “basically zero.”

“This is no surprise to us given that the watch measures heart rate and heart rate doesn’t have that much to do with the emotion you’re experiencing – it also goes up for sexual arousal or joyful experiences,” he told the outlet. “The findings raise important questions about what wearable data can or can’t tell us about mental states,” he continued. “Be careful and don’t live by your smartwatch – these are consumer devices, not medical devices.”

The study’s topic has a diverse research history. A 2023 meta-analysis of studies about wearables and stress management found that “the effect of wearable-based approaches on alleviating or reducing stress” had “not been analyzed” and that most studies up until that point had “focused on presenting overviews of wearable devices.” Another study published by researchers at the Vrije Universiteit Amsterdam in 2023 found, much like the recent psychology study, that smartwatches frequently failed to distinguish between excitement and stress. Gizmodo reached out to Garmin for comment on the recent study and will update this story if it responds.

While the study claims Garmin’s wearable didn’t do much to measure stress, researchers found it seemed to provide decent metrics in other arenas. The report says that the watches were very good at measuring sleep, although it notes that “associations were weaker for tiredness.”



Anthropic Has Added Several More Religions on Its Quest to Inject Perfect Morals into Claude

The original mysterious black box wasn’t an AI model at all, but the Kaaba, the black cube at the center of the Sacred Mosque of Mecca. Prior to Muhammad’s conquest of Mecca, the Kaaba was a sort of all-purpose repository of 360 sacred symbols from around the region. If you were, say, a busy merchant on his way to Medina, whatever the great spiritual truths of the universe may be, they were in there somewhere, so a prayer to the Kaaba had you covered in the god department and you were good to go.

Anthropic seems to be doing something along these lines with Claude.

Last week, representatives from Anthropic—along with OpenAI—attended an event in New York called the “Faith-AI Covenant” roundtable. The New York Board of Rabbis, the Hindu Temple Society of North America, the Church of Jesus Christ of Latter-day Saints, the U.S.-based Sikh Coalition, and the Greek Orthodox Archdiocese of America were all in attendance.

Last month, I wrote about a series of meetings and dinners Anthropic organized with a collection of 15 Christian leaders. Anthropic was looking for advice from the Christians, and guidance on the supposed “spiritual development” of its Claude AI model. At the time Anthropic said it was working on arranging meetings with moral thinkers who represented other groups.

It’s not clear from a fresh Associated Press piece about the Faith-AI Covenant meeting whether these latest conversations with religious leaders and the earlier meetings with Christians were part of a single coherent program at Anthropic, and whether the staff members who participated in the Christian summit participated in this one as well. Gizmodo asked Anthropic for clarity about this on Saturday, but Anthropic did not return our request as of this writing.

The Associated Press also says OpenAI and Anthropic “initiated outreach,” but also that a Swiss NGO called the Interfaith Alliance for Safer Communities organized it, and has plans for future events along similar lines in China, Kenya and the United Arab Emirates. Also mentioned as a “key partner” was Baroness Joanna Shields, a member of the British House of Lords.

There’s not a single clear takeaway in the AP story—no religious instructions laid out by all these spiritual leaders. But what Anthropic calls Claude’s constitution includes a dissection of the philosophically fraught moral work Anthropic is at least trying to do by injecting morals into a machine: getting it to make the decision of a person with perfect values when there’s no way to write a rule for a situation that arises, and the consequences of making the wrong decision could be dire. This, Anthropic writes, is “centrally because we worry that our efforts to give Claude good enough ethical values will fail.”

To this end, the Associated Press story extracts some quietly devastating commentary from Rumman Chowdhury, CEO of a nonprofit called Humane Intelligence: “I think a very naive take that Silicon Valley has had for a couple of years related to generative AI was that we could arrive at some sort of universal principles of ethics,” Chowdhury told the AP, adding, “They have very quickly realized that that’s just not true. That’s not real. So now they’re looking at maybe religion as a way of dealing with the ambiguity of ethically gray situations.”

They are indeed looking at maybe religion. But it’s hard to picture Anthropic coming away from these meetings converted, and inserting one set of specific religious doctrines into Claude. They’re just trying to glean high-order ethical truths, and demonstrating to the world that they’ve—ostensibly—left no stone unturned in searching for them.

Your mileage will vary on whether you think a machine charged with making decisions or giving important advice would, when the chips are down, be able to synthesize ideal morals thanks to meetings its creators held with administrators from some of humanity’s premier religions. It probably can’t hurt, sorta like nodding at the pre-Islamic Kaaba. But then again, only God knows for sure.


I Tried the Best Captioning Smart Glasses, and Only One Leads the Pack

Unlike the other glasses I tested, Even doesn’t sell a subscription plan; everything’s included out of the box. The only downside I could find with the G2 is that it is largely devoid of offline features, so the glasses have to be connected to the internet to do much of anything. Considering the G2’s capabilities, it’s a trade-off I am more than happy to make.

Other Captioning Glasses I Tested

There are plenty of capable captioning eyeglasses on the market, but they are surprisingly similar in both looks and features. While many are quite capable, none had the combination of power and affordability that I got with Even’s G2. Here’s a rundown of everything else I tested.

Leion’s Hey 2 is the price leader in this market, and even its prescription lenses ($90 to $299) are pretty affordable. The hardware, however, is heavy: 50 grams without lenses, 60 grams with them. A full charge gets you six to eight hours of operation; the case adds juice for up to 12 recharges.

I like the Leion interface, which lays out caption, translation, “free talk” (two-way translation), and a teleprompter feature on its clean app. You get access to nine languages; using Pro minutes expands that to 143. Leion sells its premium plan by the minute, not the month, so you need to remember to toggle this mode off when you don’t need it. Pricing is $10 for 120 minutes, $50 for 1,200 minutes, and $200 for 6,000 minutes. There’s no offline use supported, and I often struggled to get AI summaries to show up in English instead of Chinese (regardless of the recorded language).
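Leion’s by-the-minute pricing amounts to a steep volume discount. A quick back-of-envelope calculation, using only the three tier prices quoted above:

```python
# Back-of-envelope: per-minute cost of Leion's three Pro tiers quoted above.
tiers = [(10, 120), (50, 1200), (200, 6000)]  # (dollars, minutes)
cents_per_min = {minutes: round(dollars / minutes * 100, 1)
                 for dollars, minutes in tiers}
for dollars, minutes in tiers:
    print(f"${dollars} buys {minutes} min -> {cents_per_min[minutes]} cents/min")
# $10 buys 120 min -> 8.3 cents/min
# $50 buys 1200 min -> 4.2 cents/min
# $200 buys 6000 min -> 3.3 cents/min
```

The largest tier costs less than half as much per minute as the smallest, which matters given that forgetting to toggle Pro mode off burns minutes either way.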

Photograph: Christopher Null

You’re not seeing double: XRAI and Leion use the same manufacturer for their hardware, and the glasses weigh the same. The battery spec is also similar, with up to eight hours on the frames and another 96 hours when recharging with the case. XRAI claims its display is significantly brighter than competitors’, but I didn’t see much of a difference in day-to-day use.

The features and user experience are roughly the same, though Leion’s teleprompter feature isn’t implemented in XRAI’s app, and it doesn’t offer AI summaries of conversations. I also didn’t find XRAI’s app as user-friendly as Leion’s version, particularly when trying to switch among the admittedly exhaustive 300 language options. Only 20 of these are included without ponying up for a Pro subscription, which is sold both by the month and minute: $20/month gets you a max of 600 upgraded transcription minutes and 300 translation minutes; $40/month gets you 1,800 and 1,200 minutes, respectively. On the plus side, XRAI does have a rudimentary offline mode that works better than most. For prescription lenses, add $140 to $170.

Photograph: Christopher Null

AirCaps Smart Glasses

AirCaps does not make its own prescription lenses. Instead, you must purchase a pair of $39 “lens holders” and take them to an optician if you want prescription inserts. I was unable to test these with prescription lenses and ultimately had to try them out over my regular glasses, which worked well enough for short-term testing. Frames weigh a hefty 53 grams without add-on lenses; the company couldn’t tell me how much extra weight prescription lenses would add to that, but it’s safe to say these are the bulkiest and heaviest captioning glasses on the market. Despite the weight, they only carry two to four hours of battery life, with 10 or so recharges packed into the comically large case. Another option is to clip one of AirCaps’ rechargeable 13-gram Power Capsules ($79 for two) to one of the arms, which can provide 12 to 18 extra hours of juice.

The AirCaps feature list and interface make it perhaps the simplest of all these devices, with just a single button to start and stop recording. Transcriptions and translations are available for free in nine languages. For $20/month, you can add the Pro package, which offers better accuracy, access to more than 60 languages, and the option to generate AI summaries on demand (though only if recordings are long enough). As a bonus: Five hours of Pro features are free each month. Offline mode works pretty well, too. The only bad news is that these bulky frames just aren’t comfortable enough for long-term wear.

Photograph: Christopher Null

Captify’s glasses, the most expensive option on the market (up to $1,399 with prescription lenses!), weigh a relatively svelte 40 grams (52 grams with lenses) and offer about four hours of battery life. There’s no charging case; the glasses must be charged directly using the included USB-connected dongle.

The glasses are extremely simple, offering transcription and translation features—with support for about 80 languages, which is impressive. I unfortunately found the prescription lenses Captify sent to be the blurriest of the bunch, making the captions comparatively hard to read. And while the device supports offline transcription, performance suffered badly when disconnected from the internet. I couldn’t get translations to work at all when offline. For $15/month, you get better accuracy and speaker differentiation, and access to AI summaries of conversations. Prescription lenses cost between $99 and $600.


I Tried the Best Captioning Smart Glasses, and Only One Leads the Pack

Unlike the other glasses I tested, Even doesn’t sell a subscription plan; everything’s included out of the box.

The only downside I could find with the G2 is that it is largely devoid of offline features, so the glasses have to be connected to the internet to do much of anything. Considering the G2’s capabilities, it’s a trade-off I am more than happy to make.

Other Captioning Glasses I Tested

There are plenty of captioning eyeglasses on the market, and they are surprisingly similar in both looks and features. While many are quite capable, none had the combination of power and affordability that I got with Even’s G2. Here’s a rundown of everything else I tested.

  • Photograph: Christopher Null

  • Photograph: Christopher Null

  • Photograph: Christopher Null

Leion’s Hey 2 is the price leader in this market, and even its prescription lenses ($90 to $299) are pretty affordable. The hardware, however, is heavy: 50 grams without lenses, 60 grams with them. A full charge gets you six to eight hours of operation; the case adds juice for up to 12 recharges.

I like the Leion interface, which lays out caption, translation, “free talk” (two-way translation), and a teleprompter feature on its clean app. You get access to nine languages; using Pro minutes expands that to 143. Leion sells its premium plan by the minute, not the month, so you need to remember to toggle this mode off when you don’t need it. Pricing is $10 for 120 minutes, $50 for 1,200 minutes, and $200 for 6,000 minutes. There’s no offline use supported, and I often struggled to get AI summaries to show up in English instead of Chinese (regardless of the recorded language).
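Because Leion sells Pro time in blocks rather than as a flat subscription, the tiers are easiest to compare on a per-minute basis; a quick sketch of that arithmetic:

```python
# Leion Pro pricing tiers: (price in USD, minutes included)
tiers = [(10, 120), (50, 1200), (200, 6000)]

for price, minutes in tiers:
    rate = price / minutes  # cost per minute in USD
    print(f"${price} for {minutes} min -> ${rate:.3f}/min")

# Prints:
# $10 for 120 min -> $0.083/min
# $50 for 1200 min -> $0.042/min
# $200 for 6000 min -> $0.033/min
```

In other words, the largest block costs less than half as much per minute as the smallest, which matters if you plan to leave Pro mode toggled on.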

  • Photograph: Christopher Null

  • Photograph: Christopher Null

You’re not seeing double: XRAI and Leion use the same manufacturer for their hardware, and the glasses weigh the same. The battery spec is also similar, with up to eight hours on the frames and another 96 hours when recharging with the case. XRAI claims its display is significantly brighter than competitors’, but I didn’t see much of a difference in day-to-day use.

The features and user experience are roughly the same, though Leion’s teleprompter feature isn’t implemented in XRAI’s app, and it doesn’t offer AI summaries of conversations. I also didn’t find XRAI’s app as user-friendly as Leion’s version, particularly when trying to switch among the admittedly exhaustive 300 language options. Only 20 of these are included without ponying up for a Pro subscription, which is sold both by the month and minute: $20/month gets you a max of 600 upgraded transcription minutes and 300 translation minutes; $40/month gets you 1,800 and 1,200 minutes, respectively. On the plus side, XRAI does have a rudimentary offline mode that works better than most. For prescription lenses, add $140 to $170.

  • Photograph: Christopher Null

  • Photograph: Christopher Null

AirCaps

AirCaps Smart Glasses

AirCaps does not make its own prescription lenses. Instead, you must purchase a pair of $39 “lens holders” and take them to an optician if you want prescription inserts. I was unable to test these with prescription lenses and ultimately had to try them out over my regular glasses, which worked well enough for short-term testing. Frames weigh a hefty 53 grams without add-on lenses; the company couldn’t tell me how much extra weight prescription lenses would add to that, but it’s safe to say these are the bulkiest and heaviest captioning glasses on the market. Despite the weight, battery life is just two to four hours, with 10 or so recharges packed into the comically large case. Another option is to clip one of AirCaps’ rechargeable 13-gram Power Capsules ($79 for two) to one of the arms, which can provide 12 to 18 extra hours of juice.

The AirCaps feature list and interface make it perhaps the simplest of all these devices, with just a single button to start and stop recording. Transcriptions and translations are available for free in nine languages. For $20/month, you can add the Pro package, which offers better accuracy, access to more than 60 languages, and the option to generate AI summaries on demand (though only if recordings are long enough). As a bonus: Five hours of Pro features are free each month. Offline mode works pretty well, too. The only bad news is that these bulky frames just aren’t comfortable enough for long-term wear.

  • Photograph: Christopher Null

  • Photograph: Christopher Null

Captify, the most expensive option on the market (up to $1,399 with prescription lenses!), weighs a relatively svelte 40 grams (52 grams with lenses) and offers about four hours of battery life. There’s no charging case; the glasses must be charged directly using the included USB-connected dongle.

The glasses are extremely simple, offering transcription and translation features—with support for about 80 languages, which is impressive. I unfortunately found the prescription lenses Captify sent to be the blurriest of the bunch, making the captions comparatively hard to read. And while the device supports offline transcription, performance suffered badly when disconnected from the internet. I couldn’t get translations to work at all when offline. For $15/month, you get better accuracy and speaker differentiation, and access to AI summaries of conversations. Prescription lenses cost between $99 and $600.

