Deadly ‘Wet-Bulb’ Temperatures Are Smothering the Eastern U.S.

An oppressive heat dome has gripped the eastern U.S. this week, prompting the National Weather Service (NWS) to issue heat warnings for nearly 170 million Americans. To make matters worse, severe humidity is making high temperatures feel even hotter.

Extreme heat and humidity make for a deadly combination. The human body lowers its temperature by sweating, and when sweat evaporates, it cools the surface of the skin. Humidity slows this process down, increasing the risk of heat-related illness. To gauge the combined physiological impact of heat and humidity, meteorologists look at the wet-bulb temperature. This measurement essentially represents the amount of heat stress the body experiences under hot, humid conditions. It’s also a critical metric for understanding human survivability in a changing climate.

“The wet-bulb temperature is literally the temperature of a wet thermometer’s bulb, traditionally measured by putting a tiny wet sock on the end of a thermometer,” David Romps, a professor of Earth and planetary science at the University of California-Berkeley, told Gizmodo in an email. Similar to a sweating person, the wet-bulb thermometer cools itself by evaporating water, “but a wet-bulb thermometer is not like a person in some important ways,” he explained.

Humans generate body heat, which must dissipate into the air. “Therefore, all else equal, a sweaty person will be warmer than a wet bulb,” Romps said. When the wet-bulb temperature approaches 98.6 degrees Fahrenheit (37 degrees Celsius)—the average human body temperature—it’s extremely difficult to maintain a safe internal temperature. This may lead to severe heat-related illness or even death, he explained.

Experts have long believed that a wet-bulb temperature of 35 degrees Celsius (equal to 95 degrees Fahrenheit at 100% humidity or 115 degrees Fahrenheit at 50% humidity) was the threshold at which the human body can no longer cool itself. In recent years, however, researchers have found evidence to suggest that this threshold is actually much lower.

“Based on our research, a wet bulb temperature of around 87 degrees Fahrenheit [30.6 degrees Celsius] at 100% humidity is the critical threshold above which humans cannot maintain a stable core temperature if they were exposed to those conditions for hours at a time,” Kat Fisher, a PhD candidate in the human thermoregulatory lab at Penn State University, told Gizmodo in an email.
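For a rough sense of how these numbers combine, the wet-bulb temperature can be approximated from air temperature and relative humidity using Stull’s well-known empirical fit. This is a back-of-the-envelope sketch, accurate to within about 1 degree Celsius over normal conditions, not how meteorologists actually measure it:

```python
import math

def wet_bulb_c(t_c: float, rh_pct: float) -> float:
    """Approximate wet-bulb temperature via Stull's empirical fit.

    t_c: air temperature in deg C; rh_pct: relative humidity in percent.
    Valid roughly for RH between 5% and 99%; accurate to about 1 deg C.
    """
    return (
        t_c * math.atan(0.151977 * math.sqrt(rh_pct + 8.313659))
        + math.atan(t_c + rh_pct)
        - math.atan(rh_pct - 1.676331)
        + 0.00391838 * rh_pct**1.5 * math.atan(0.023101 * rh_pct)
        - 4.686035
    )

# Near-saturated air: sweat barely evaporates, so the wet-bulb
# temperature converges on the air temperature itself.
print(wet_bulb_c(35.0, 99.0))  # ~35 deg C
# Drier air at the same temperature evaporates sweat much faster.
print(wet_bulb_c(35.0, 50.0))  # ~26.6 deg C
```

The first case illustrates why the humid thresholds above are so dangerous: at saturation, the wet-bulb temperature equals the air temperature, and the body loses its main cooling mechanism.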

Combining the wet-bulb temperature with air temperature, wind speed, cloud cover, and the angle of the Sun gives meteorologists the wet-bulb globe temperature (WBGT), a comprehensive measure of heat stress in direct sunlight. On Tuesday, July 29, the NWS reported WBGT values in the high 80s to low 90s Fahrenheit (upper 20s to mid-30s Celsius) across much of the eastern U.S., particularly in the Southeast and Midwest.

WBGT values above 90 degrees Fahrenheit (32 degrees Celsius) are extreme and can induce heat stress in just 15 minutes when working or exercising in direct sunlight, according to the NWS. Weather officials expect these conditions to persist through Wednesday, July 30, before the heat dome dissipates later in the week.
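The outdoor WBGT itself is a fixed weighted average of three thermometer readings: the natural wet-bulb temperature, the black-globe temperature (which captures the effects of sun and wind), and the ordinary dry-bulb air temperature. A minimal sketch of the standard ISO 7243 weighting, with illustrative inputs rather than measured values:

```python
def wbgt_outdoor_c(t_wet: float, t_globe: float, t_dry: float) -> float:
    """Outdoor wet-bulb globe temperature (standard ISO 7243 weighting).

    t_wet: natural wet-bulb temp; t_globe: black-globe temp, which
    captures solar radiation and wind; t_dry: ordinary air temp.
    All values in deg C.
    """
    return 0.7 * t_wet + 0.2 * t_globe + 0.1 * t_dry

# Illustrative hot, humid afternoon. Humidity dominates the index
# because the wet-bulb term carries 70% of the weight.
print(wbgt_outdoor_c(t_wet=28.0, t_globe=45.0, t_dry=35.0))  # ~32.1 deg C
```

The example works out to roughly 90 degrees Fahrenheit, right at the bottom of the NWS’s extreme range.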

Over the long term, dangerous wet-bulb temperature events are here to stay. “Human-caused global warming is driving up wet-bulb temperatures, pushing even healthy people closer to their physiological limit. And that limit is real,” Romps said. The human body is physiologically incapable of withstanding wet-bulb temperatures around or above its internal temperature, he explained.

As the atmosphere warms, it can hold more moisture, increasing the frequency and intensity of extreme wet-bulb temperatures. Climate models suggest that certain regions of the world could see wet-bulb temperatures regularly topping 95 degrees Fahrenheit (35 degrees Celsius) within the next 30 to 50 years, according to NASA. In the U.S., Midwestern states like Arkansas, Missouri, and Iowa will likely hit the critical wet-bulb temperature limit within 50 years.

“Throughout the 300,000 years of our species, there has been no need to tolerate such wet-bulb temperatures because it is likely they never occurred as a normal part of weather throughout that time,” Romps said. “Global warming is changing that, and fast.”

Extreme heat is already the deadliest weather hazard in the U.S. Data from the Centers for Disease Control and Prevention (CDC) show that roughly 2,000 Americans die from heat-related causes per year, ABC News reports. Some experts believe the death toll is grossly underestimated. Understanding the limits of human survivability in a warmer world is literally a matter of life or death. There is an urgent need to adapt infrastructure, public health systems, and extreme heat response measures to the changing climate.



Anthropic Has Added Several More Religions on Its Quest to Inject Perfect Morals into Claude

The original mysterious black box wasn’t an AI model at all, but the Kaaba, the black cube at the center of the Sacred Mosque of Mecca. Prior to Muhammad’s conquest of Mecca, the Kaaba was a sort of all-purpose repository of 360 sacred symbols from around the region. If you were, say, a busy merchant on his way to Medina, whatever the great spiritual truths of the universe may be, they were in there somewhere, so a prayer to the Kaaba had you covered in the god department and you were good to go.

Anthropic seems to be doing something along these lines with Claude.

Last week, representatives from Anthropic—along with OpenAI—attended an event in New York called the “Faith-AI Covenant” roundtable. The New York Board of Rabbis, the Hindu Temple Society of North America, the Church of Jesus Christ of Latter-day Saints, the U.S.-based Sikh Coalition, and the Greek Orthodox Archdiocese of America were all in attendance.

Last month, I wrote about a series of meetings and dinners Anthropic organized with a collection of 15 Christian leaders. Anthropic was looking for advice from the Christians, and guidance on the supposed “spiritual development” of its Claude AI model. At the time Anthropic said it was working on arranging meetings with moral thinkers who represented other groups.

It’s not clear from a fresh Associated Press piece about the Faith-AI Covenant meeting whether these latest conversations with religious leaders and the earlier meetings with Christians were part of a single coherent program at Anthropic, or whether the staff members who participated in the Christian summit participated in this one as well. Gizmodo asked Anthropic for clarity about this on Saturday, but Anthropic had not responded to our request as of this writing.

The Associated Press also says OpenAI and Anthropic “initiated outreach,” but also that a Swiss NGO called the Interfaith Alliance for Safer Communities organized it, and has plans for future events along similar lines in China, Kenya and the United Arab Emirates. Also mentioned as a “key partner” was Baroness Joanna Shields, a member of the British House of Lords.

There’s not a single clear takeaway in the AP story—no religious instructions laid out by all these spiritual leaders. But what Anthropic calls Claude’s constitution includes a dissection of the philosophically fraught moral work Anthropic is at least trying to do by injecting morals into a machine: getting it to make the decision of a person with perfect values when there’s no way to write a rule for a situation that arises, and the consequences of making the wrong decision could be dire. This, Anthropic writes, is “centrally because we worry that our efforts to give Claude good enough ethical values will fail.”

To this end, the Associated Press story extracts some quietly devastating commentary from Rumman Chowdhury, CEO of a nonprofit called Humane Intelligence: “I think a very naive take that Silicon Valley has had for a couple of years related to generative AI was that we could arrive at some sort of universal principles of ethics,” Chowdhury told the AP, adding, “They have very quickly realized that that’s just not true. That’s not real. So now they’re looking at maybe religion as a way of dealing with the ambiguity of ethically gray situations.”

They are indeed looking at maybe religion. But it’s hard to picture Anthropic coming away from these meetings converted, and inserting one set of specific religious doctrines into Claude. They’re just trying to glean high-order ethical truths, and demonstrating to the world that they’ve—ostensibly—left no stone unturned in searching for them.

Your mileage will vary on whether you think a machine charged with making decisions or giving important advice would, when the chips are down, be able to synthesize ideal morals thanks to meetings its creators held with administrators from some of humanity’s premier religions. It probably can’t hurt, sorta like nodding at the pre-Islamic Kaaba. But then again, only God knows for sure.


Unlike the other glasses I tested, Even doesn’t sell a subscription plan; everything’s included out of the box. The only downside I could find with the G2 is that it is largely devoid of offline features, so the glasses have to be connected to the internet to do much of anything. Considering the G2’s capabilities, it’s a trade-off I am more than happy to make.

Other Captioning Glasses I Tested

There are plenty of captioning eyeglasses on the market, and they are surprisingly similar in both looks and features. While many are quite capable, none had the combination of power and affordability that I got with Even’s G2. Here’s a rundown of everything else I tested.

Leion’s Hey 2 is the price leader in this market, and even its prescription lenses ($90 to $299) are pretty affordable. The hardware, however, is heavy: 50 grams without lenses, 60 grams with them. A full charge gets you six to eight hours of operation; the case adds juice for up to 12 recharges.

I like the Leion interface, which lays out caption, translation, “free talk” (two-way translation), and a teleprompter feature on its clean app. You get access to nine languages; using Pro minutes expands that to 143. Leion sells its premium plan by the minute, not the month, so you need to remember to toggle this mode off when you don’t need it. Pricing is $10 for 120 minutes, $50 for 1,200 minutes, and $200 for 6,000 minutes. There’s no offline use supported, and I often struggled to get AI summaries to show up in English instead of Chinese (regardless of the recorded language).
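Since Leion sells Pro time by the minute rather than by the month, the tiers are easiest to compare as effective per-minute rates. A quick sketch using the prices above:

```python
# Leion Pro tiers from the review: minutes purchased -> price in USD.
tiers = {120: 10, 1200: 50, 6000: 200}

# Effective cost per minute for each tier.
per_minute = {minutes: price / minutes for minutes, price in tiers.items()}

for minutes in sorted(per_minute):
    print(f"{minutes:>5} min: ${per_minute[minutes]:.3f}/min")
```

The 6,000-minute tier works out to roughly 2.5 times cheaper per minute than the 120-minute tier, so heavy users are clearly better off buying in bulk.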

You’re not seeing double: XRAI and Leion use the same manufacturer for their hardware, and the glasses weigh the same. The battery spec is also similar, with up to eight hours on the frames and another 96 hours when recharging with the case. XRAI claims its display is significantly brighter than competitors’, but I didn’t see much of a difference in day-to-day use.

The features and user experience are roughly the same, though Leion’s teleprompter feature isn’t implemented in XRAI’s app, and it doesn’t offer AI summaries of conversations. I also didn’t find XRAI’s app as user-friendly as Leion’s version, particularly when trying to switch among the admittedly exhaustive 300 language options. Only 20 of these are included without ponying up for a Pro subscription, which is sold both by the month and minute: $20/month gets you a max of 600 upgraded transcription minutes and 300 translation minutes; $40/month gets you 1,800 and 1,200 minutes, respectively. On the plus side, XRAI does have a rudimentary offline mode that works better than most. For prescription lenses, add $140 to $170.

AirCaps does not make its own prescription lenses. Instead, you must purchase a pair of $39 “lens holders” and take them to an optician if you want prescription inserts. I was unable to test these with prescription lenses and ultimately had to try them out over my regular glasses, which worked well enough for short-term testing. Frames weigh a hefty 53 grams without add-on lenses; the company couldn’t tell me how much extra weight prescription lenses would add to that, but it’s safe to say these are the bulkiest and heaviest captioning glasses on the market. Despite the weight, they only carry two to four hours of battery life, with 10 or so recharges packed into the comically large case. Another option is to clip one of AirCaps’ rechargeable 13-gram Power Capsules ($79 for two) to one of the arms, which can provide 12 to 18 extra hours of juice.

The AirCaps feature list and interface make it perhaps the simplest of all these devices, with just a single button to start and stop recording. Transcriptions and translations are available for free in nine languages. For $20/month, you can add the Pro package, which offers better accuracy, access to more than 60 languages, and the option to generate AI summaries on demand (though only if recordings are long enough). As a bonus: Five hours of Pro features are free each month. Offline mode works pretty well, too. The only bad news is that these bulky frames just aren’t comfortable enough for long-term wear.

Captify’s glasses are the most expensive option on the market (up to $1,399 with prescription lenses!), yet they weigh a relatively svelte 40 grams (52 grams with lenses) and offer about four hours of battery life. There’s no charging case; the glasses must be charged directly using the included USB-connected dongle.

The glasses are extremely simple, offering transcription and translation features—with support for about 80 languages, which is impressive. I unfortunately found the prescription lenses Captify sent to be the blurriest of the bunch, making the captions comparatively hard to read. And while the device supports offline transcription, performance suffered badly when disconnected from the internet. I couldn’t get translations to work at all when offline. For $15/month, you get better accuracy and speaker differentiation, and access to AI summaries of conversations. Prescription lenses cost between $99 and $600.


I Tried the Best Captioning Smart Glasses, and Only One Leads the Pack

Unlike the other glasses I tested, Even doesn’t sell a subscription plan; everything’s included out of the box.

The only downside I could find with the G2 is that it is largely devoid of offline features, so the glasses have to be connected to the internet to do much of anything. Considering the G2’s capabilities, it’s a trade-off I am more than happy to make.

Other Captioning Glasses I Tested

There are plenty of captioning eyeglasses on the market, and they are surprisingly similar in both looks and features. While many are quite capable, none had the combination of power and affordability that I got with Even’s G2. Here’s a rundown of everything else I tested.

  • Photograph: Christopher Null

Leion’s Hey 2 is the price leader in this market, and even its prescription lenses ($90 to $299) are pretty affordable. The hardware, however, is heavy: 50 grams without lenses, 60 grams with them. A full charge gets you six to eight hours of operation; the case adds juice for up to 12 recharges.

I like the Leion interface, which lays out caption, translation, “free talk” (two-way translation), and a teleprompter feature on its clean app. You get access to nine languages; using Pro minutes expands that to 143. Leion sells its premium plan by the minute, not the month, so you need to remember to toggle this mode off when you don’t need it. Pricing is $10 for 120 minutes, $50 for 1,200 minutes, and $200 for 6,000 minutes. There’s no offline use supported, and I often struggled to get AI summaries to show up in English instead of Chinese (regardless of the recorded language).
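Because Leion sells Pro time by the minute rather than the month, it’s worth working out what each tier actually costs per minute. A quick back-of-the-envelope calculation (using only the tier prices quoted above) makes the bulk discount explicit:

```python
# Per-minute cost of Leion's Pro tiers, using the prices quoted above:
# $10 for 120 minutes, $50 for 1,200 minutes, $200 for 6,000 minutes.
tiers = [(10, 120), (50, 1_200), (200, 6_000)]

for price, minutes in tiers:
    rate = price / minutes
    print(f"${price} for {minutes} min -> ${rate:.3f}/min")
# $10 for 120 min -> $0.083/min
# $50 for 1200 min -> $0.042/min
# $200 for 6000 min -> $0.033/min
```

In other words, the 6,000-minute tier works out to roughly 40 percent of the smallest tier’s per-minute rate, so heavy users of translation or Pro transcription are much better off buying the larger blocks.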

  • Photograph: Christopher Null

You’re not seeing double: XRAI and Leion use the same manufacturer for their hardware, and the glasses weigh the same. The battery spec is also similar, with up to eight hours on the frames and another 96 hours when recharging with the case. XRAI claims its display is significantly brighter than competitors’, but I didn’t see much of a difference in day-to-day use.

The features and user experience are roughly the same, though Leion’s teleprompter feature isn’t implemented in XRAI’s app, and it doesn’t offer AI summaries of conversations. I also didn’t find XRAI’s app as user-friendly as Leion’s version, particularly when trying to switch among the admittedly exhaustive 300 language options. Only 20 of these are included without ponying up for a Pro subscription, which is sold both by the month and minute: $20/month gets you a max of 600 upgraded transcription minutes and 300 translation minutes; $40/month gets you 1,800 and 1,200 minutes, respectively. On the plus side, XRAI does have a rudimentary offline mode that works better than most. For prescription lenses, add $140 to $170.

  • Photograph: Christopher Null

AirCaps

AirCaps Smart Glasses

AirCaps does not make its own prescription lenses. Instead, you must purchase a pair of $39 “lens holders” and take them to an optician if you want prescription inserts. I was unable to test these with prescription lenses and ultimately had to try them out over my regular glasses, which worked well enough for short-term testing. Frames weigh a hefty 53 grams without add-on lenses; the company couldn’t tell me how much extra weight prescription lenses would add, but it’s safe to say these are the bulkiest and heaviest captioning glasses on the market. Despite the weight, they deliver only two to four hours of battery life, with 10 or so recharges packed into the comically large case. Another option is to clip one of AirCaps’ rechargeable 13-gram Power Capsules ($79 for two) to one of the arms, which can provide 12 to 18 extra hours of juice.

The AirCaps feature list and interface make it perhaps the simplest of all these devices, with just a single button to start and stop recording. Transcriptions and translations are available for free in nine languages. For $20/month, you can add the Pro package, which offers better accuracy, access to more than 60 languages, and the option to generate AI summaries on demand (though only if recordings are long enough). As a bonus: Five hours of Pro features are free each month. Offline mode works pretty well, too. The only bad news is that these bulky frames just aren’t comfortable enough for long-term wear.

  • Photograph: Christopher Null

Captify’s glasses, the most expensive option on the market (up to $1,399 with prescription lenses!), weigh a relatively svelte 40 grams (52 grams with lenses) and offer about four hours of battery life. There’s no charging case; the glasses must be charged directly using the included USB-connected dongle.

The glasses are extremely simple, offering transcription and translation features—with support for about 80 languages, which is impressive. I unfortunately found the prescription lenses Captify sent to be the blurriest of the bunch, making the captions comparatively hard to read. And while the device supports offline transcription, performance suffered badly when disconnected from the internet. I couldn’t get translations to work at all when offline. For $15/month, you get better accuracy and speaker differentiation, and access to AI summaries of conversations. Prescription lenses cost between $99 and $600.
