
Ex-Googler’s Yoodli triples valuation to $300M+ with AI built to assist, not replace, people | TechCrunch

Yoodli, an AI-powered communication training startup, has reached a valuation of more than $300 million — more than triple its level six months ago — as it builds technology meant to assist people rather than replace them with machines.

The valuation increase follows Yoodli’s $40 million Series B round, led by WestBridge Capital with participation from Neotribe and Madrona. It comes after a $13.7 million Series A round announced in May, bringing the startup’s total funding to nearly $60 million.

As AI tools spread into workplaces and fuel fears of automation, Yoodli positions itself differently. The four-year-old, Seattle-based startup uses AI to run simulated scenarios — including sales calls, leadership coaching, interviews, and feedback sessions — and provides users with structured, repeatable practice to improve their speaking skills.

Varun Puri (pictured above, right), who previously worked at Google’s X division and handled special projects for Sergey Brin, co-founded Yoodli with former Apple engineer Esha Joshi (pictured above, left) in 2021. He became aware of communication challenges after moving to the U.S. at 18 and seeing how difficulty expressing ideas or speaking confidently affected students and young professionals from countries such as India — himself included — Puri said in an interview.

Initially, Yoodli was meant to help people practice public speaking — a skill two out of three people struggle with, Puri told TechCrunch, citing internal data. However, the startup soon saw users turning to the platform for interview preparation, sales pitches, and difficult conversations. That shift pushed Yoodli from a consumer-focused product to enterprise training, and it now offers AI role-plays and experiential learning tools for go-to-market enablement, partner certification, and management coaching.

Yoodli’s platform. Image Credits: Yoodli

“In the old world, companies would be training people using static, long-form content or passive videos that we’d all watch at 4x-5x speed, just to get the thing done,” said Puri. “But that doesn’t actually mean you’ve learned it.”

Companies including Google, Snowflake, Databricks, RingCentral, and Sandler Sales use Yoodli for employee or partner training. The startup also sells its platform to coaching firms such as Franklin Covey and LHH, which can tailor the system to their own methodology and training frameworks, Puri stated. He added that the tool is not designed to replace human coaches but to keep a human in the loop delivering personalized guidance.


“I philosophically believe that AI can get you, let’s call it from a zero to an eight or a zero to nine,” said Puri. “But the pure essence of who you are and how you show up, and your authenticity and vulnerability that a human gives you feedback on will always exist.”

The platform works with multiple large language models, meaning users can run it with models such as Google’s Gemini or OpenAI’s GPT based on their preference. Enterprises can also embed it into their existing software, or users can access it directly through a web browser. The AI supports most major languages, including Korean, Japanese, French, Canadian French, and several Indian languages.

Yoodli does not offer a dedicated mobile app, a decision Puri said was made to avoid adding extra steps for users during training sessions.

Yoodli’s team. Image Credits: Yoodli

Puri did not disclose how many people use the platform but said most of Yoodli’s revenue now comes from enterprise customers. He added that between the Series A and B rounds, Yoodli saw a 50% increase in the number of role-plays run on the platform and in the total time users spent practicing. The startup also said it grew its annual recurring revenue by 900% over the last 12 months, though it did not provide specific figures.

Yoodli had not planned to raise more funding so soon after its last round but saw unanticipated investor interest, with WestBridge leading the latest raise, Puri said. He noted that strong performance metrics, key customers, and senior hires helped attract investors. The startup has recently hired former Tableau and Salesforce executive Josh Vitello as chief revenue officer (CRO), former Remitly CFO Andy Larson as CFO, and former Tableau chief product officer (CPO) Padmashree Koneti as CPO.

Yoodli is not alone in the market for AI-based communication tools, but Puri told TechCrunch the startup differentiates itself through deep customization and a focus on specific training verticals, allowing companies to tailor the system to their use cases and coaching methods.

The Seattle-headquartered startup has about 40 employees. Puri said the latest funding will be used to expand Yoodli’s AI coaching, analytics, and personalization tools, and to grow its presence in enterprise learning and professional development. The company also plans to hire across product, AI research, and customer success, and to expand into markets in the Asia-Pacific region while deepening its footprint in the U.S.



Anthropic Has Added Several More Religions on Its Quest to Inject Perfect Morals into Claude
The original mysterious black box wasn’t an AI model at all, but the Kaaba, the black cube at the center of the Sacred Mosque of Mecca. Prior to Muhammad’s conquest of Mecca, the Kaaba was a sort of all-purpose repository of 360 sacred symbols from around the region. If you were, say, a busy merchant on his way to Medina, whatever the great spiritual truths of the universe may be, they were in there somewhere, so a prayer to the Kaaba had you covered in the god department and you were good to go.

Anthropic seems to be doing something along these lines with Claude.

Last week, representatives from Anthropic—along with OpenAI—attended an event in New York called the “Faith-AI Covenant” roundtable. The New York Board of Rabbis, the Hindu Temple Society of North America, the Church of Jesus Christ of Latter-day Saints, the U.S.-based Sikh Coalition, and the Greek Orthodox Archdiocese of America were all in attendance.

Last month, I wrote about a series of meetings and dinners Anthropic organized with a collection of 15 Christian leaders. Anthropic was looking for advice from the Christians, and guidance on the supposed “spiritual development” of its Claude AI model. At the time Anthropic said it was working on arranging meetings with moral thinkers who represented other groups.

It’s not clear from a fresh Associated Press piece about the Faith-AI Covenant meeting whether these latest conversations with religious leaders and the earlier meetings with Christians were part of a single coherent program at Anthropic, and whether the staff members who participated in the Christian summit participated in this one as well. Gizmodo asked Anthropic for clarity about this on Saturday, but Anthropic did not return our request as of this writing.

The Associated Press also says OpenAI and Anthropic “initiated outreach,” but also that a Swiss NGO called the Interfaith Alliance for Safer Communities organized it, and has plans for future events along similar lines in China, Kenya and the United Arab Emirates. Also mentioned as a “key partner” was Baroness Joanna Shields, a member of the British House of Lords.

There’s not a single clear takeaway in the AP story—no religious instructions laid out by all these spiritual leaders. But what Anthropic calls Claude’s constitution includes a dissection of the philosophically fraught moral work Anthropic is at least trying to do by injecting morals into a machine: getting it to make the decision of a person with perfect values when there’s no way to write a rule for a situation that arises, and the consequences of making the wrong decision could be dire. This, Anthropic writes, is “centrally because we worry that our efforts to give Claude good enough ethical values will fail.”

To this end, the Associated Press story extracts some quietly devastating commentary from Rumman Chowdhury, CEO of a nonprofit called Humane Intelligence: “I think a very naive take that Silicon Valley has had for a couple of years related to generative AI was that we could arrive at some sort of universal principles of ethics,” Chowdhury told the AP, adding, “They have very quickly realized that that’s just not true. That’s not real. So now they’re looking at maybe religion as a way of dealing with the ambiguity of ethically gray situations.”

They are indeed looking at maybe religion. But it’s hard to picture Anthropic coming away from these meetings converted, and inserting one set of specific religious doctrines into Claude. They’re just trying to glean high-order ethical truths, and demonstrating to the world that they’ve — ostensibly — left no stone unturned in searching for them.

Your mileage will vary on whether you think a machine charged with making decisions or giving important advice would, when the chips are down, be able to synthesize ideal morals thanks to meetings its creators held with administrators from some of humanity’s premier religions. It probably can’t hurt, sorta like nodding at the pre-Islamic Kaaba. But then again, only God knows for sure.


I Tried the Best Captioning Smart Glasses, and Only One Leads the Pack

Unlike the other glasses I tested, Even doesn’t sell a subscription plan; everything’s included out of the box.
The only downside I could find with the G2 is that it is largely devoid of offline features, so the glasses have to be connected to the internet to do much of anything. Considering the G2’s capabilities, it’s a trade-off I am more than happy to make.

Other Captioning Glasses I Tested

There are plenty of capable captioning eyeglasses on the market, but they are surprisingly similar in both looks and features. While many are quite capable, none had the combination of power and affordability that I got with Even’s G2. Here’s a rundown of everything else I tested.

Leion’s Hey 2 is the price leader in this market, and even its prescription lenses ($90 to $299) are pretty affordable. The hardware, however, is heavy: 50 grams without lenses, 60 grams with them. A full charge gets you six to eight hours of operation; the case adds juice for up to 12 recharges.

I like the Leion interface, which lays out caption, translation, “free talk” (two-way translation), and a teleprompter feature on its clean app. You get access to nine languages; using Pro minutes expands that to 143. Leion sells its premium plan by the minute, not the month, so you need to remember to toggle this mode off when you don’t need it. Pricing is $10 for 120 minutes, $50 for 1,200 minutes, and $200 for 6,000 minutes. There’s no offline use supported, and I often struggled to get AI summaries to show up in English instead of Chinese (regardless of the recorded language).


You’re not seeing double: XRAI and Leion use the same manufacturer for their hardware, and the glasses weigh the same. The battery spec is also similar, with up to eight hours on the frames and another 96 hours when recharging with the case. XRAI claims its display is significantly brighter than competitors’, but I didn’t see much of a difference in day-to-day use.

The features and user experience are roughly the same, though Leion’s teleprompter feature isn’t implemented in XRAI’s app, and it doesn’t offer AI summaries of conversations. I also didn’t find XRAI’s app as user-friendly as Leion’s version, particularly when trying to switch among the admittedly exhaustive 300 language options. Only 20 of these are included without ponying up for a Pro subscription, which is sold both by the month and minute: $20/month gets you a max of 600 upgraded transcription minutes and 300 translation minutes; $40/month gets you 1,800 and 1,200 minutes, respectively. On the plus side, XRAI does have a rudimentary offline mode that works better than most. For prescription lenses, add $140 to $170.


AirCaps Smart Glasses

AirCaps does not make its own prescription lenses. Instead, you must purchase a pair of $39 “lens holders” and take them to an optician if you want prescription inserts. I was unable to test these with prescription lenses and ultimately had to try them out over my regular glasses, which worked well enough for short-term testing. Frames weigh a hefty 53 grams without add-on lenses; the company couldn’t tell me how much extra weight prescription lenses would add to that, but it’s safe to say these are the bulkiest and heaviest captioning glasses on the market. Despite the weight, they only carry two to four hours of battery life, with 10 or so recharges packed into the comically large case. Another option is to clip one of AirCaps’ rechargeable 13-gram Power Capsules ($79 for two) to one of the arms, which can provide 12 to 18 extra hours of juice.

The AirCaps feature list and interface make it perhaps the simplest of all these devices, with just a single button to start and stop recording. Transcriptions and translations are available for free in nine languages. For $20/month, you can add the Pro package, which offers better accuracy, access to more than 60 languages, and the option to generate AI summaries on demand (though only if recordings are long enough). As a bonus: Five hours of Pro features are free each month. Offline mode works pretty well, too. The only bad news is that these bulky frames just aren’t comfortable enough for long-term wear.


The most expensive option on the market (up to $1,399 with prescription lenses!) weighs a relatively svelte 40 grams (52 grams with lenses) and offers about four hours of battery life. There’s no charging case; the glasses must be charged directly using the included USB-connected dongle.

The glasses are extremely simple, offering transcription and translation features—with support for about 80 languages, which is impressive. I unfortunately found the prescription lenses Captify sent to be the blurriest of the bunch, making the captions comparatively hard to read. And while the device supports offline transcription, performance suffered badly when disconnected from the internet. I couldn’t get translations to work at all when offline. For $15/month, you get better accuracy and speaker differentiation, and access to AI summaries of conversations. Prescription lenses cost between $99 and $600.



I Tried the Best Captioning Smart Glasses, and Only One Leads the Pack

Unlike the other glasses I tested, Even doesn’t sell a subscription plan; everything’s included out of the box.

The only downside I could find with the G2 is that it is largely devoid of offline features, so the glasses have to be connected to the internet to do much of anything. Considering the G2’s capabilities, it’s a trade-off I am more than happy to make.

Other Captioning Glasses I Tested

There are plenty of captioning eyeglasses on the market, and they are surprisingly similar in both looks and features. While many are quite capable, none had the combination of power and affordability that I got with Even’s G2. Here’s a rundown of everything else I tested.

Photograph: Christopher Null

Leion’s Hey 2 is the price leader in this market, and even its prescription lenses ($90 to $299) are pretty affordable. The hardware, however, is heavy: 50 grams without lenses, 60 grams with them. A full charge gets you six to eight hours of operation; the case adds juice for up to 12 recharges.

I like the Leion interface, which lays out caption, translation, “free talk” (two-way translation), and a teleprompter feature on its clean app. You get access to nine languages; using Pro minutes expands that to 143. Leion sells its premium plan by the minute, not the month, so you need to remember to toggle this mode off when you don’t need it. Pricing is $10 for 120 minutes, $50 for 1,200 minutes, and $200 for 6,000 minutes. There’s no offline use supported, and I often struggled to get AI summaries to show up in English instead of Chinese (regardless of the recorded language).
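Because Leion prices Pro time by the minute across three prepaid tiers, the effective per-minute rate drops steeply with the larger packs. A quick back-of-the-envelope sketch in Python, using the tier prices quoted above, makes the comparison concrete:

```python
# Effective per-minute cost of Leion's prepaid Pro tiers (prices from the review).
tiers = [(10, 120), (50, 1200), (200, 6000)]  # (price in USD, minutes included)

for price, minutes in tiers:
    per_min = price / minutes
    print(f"${price} buys {minutes} min -> ${per_min:.3f}/minute")
# The $200 pack works out to roughly $0.033 per minute,
# less than half the rate of the $10 starter pack.
```

In other words, heavy users get a meaningful bulk discount, but only if they remember to toggle Pro mode off between sessions so the minutes aren’t burned idly.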

Photograph: Christopher Null

You’re not seeing double: XRAI and Leion use the same manufacturer for their hardware, and the glasses weigh the same. The battery spec is also similar, with up to eight hours on the frames and another 96 hours when recharging with the case. XRAI claims its display is significantly brighter than competitors’, but I didn’t see much of a difference in day-to-day use.

The features and user experience are roughly the same, though Leion’s teleprompter feature isn’t implemented in XRAI’s app, and it doesn’t offer AI summaries of conversations. I also didn’t find XRAI’s app as user-friendly as Leion’s version, particularly when trying to switch among the admittedly exhaustive 300 language options. Only 20 of these are included without ponying up for a Pro subscription, which is sold both by the month and minute: $20/month gets you a max of 600 upgraded transcription minutes and 300 translation minutes; $40/month gets you 1,800 and 1,200 minutes, respectively. On the plus side, XRAI does have a rudimentary offline mode that works better than most. For prescription lenses, add $140 to $170.

Photograph: Christopher Null


AirCaps does not make its own prescription lenses. Instead, you must purchase a pair of $39 “lens holders” and take them to an optician if you want prescription inserts. I was unable to test these with prescription lenses and ultimately had to try them out over my regular glasses, which worked well enough for short-term testing. Frames weigh a hefty 53 grams without add-on lenses; the company couldn’t tell me how much extra weight prescription lenses would add, but it’s safe to say these are the bulkiest and heaviest captioning glasses on the market. Despite the weight, they deliver only two to four hours of battery life, with 10 or so recharges packed into the comically large case. Another option is to clip one of AirCaps’ rechargeable 13-gram Power Capsules ($79 for two) to one of the arms, which can provide 12 to 18 extra hours of juice.

The AirCaps feature list and interface make it perhaps the simplest of all these devices, with just a single button to start and stop recording. Transcriptions and translations are available for free in nine languages. For $20/month, you can add the Pro package, which offers better accuracy, access to more than 60 languages, and the option to generate AI summaries on demand (though only if recordings are long enough). As a bonus: Five hours of Pro features are free each month. Offline mode works pretty well, too. The only bad news is that these bulky frames just aren’t comfortable enough for long-term wear.

Photograph: Christopher Null

Captify’s glasses are the most expensive option on the market (up to $1,399 with prescription lenses!), yet they weigh a relatively svelte 40 grams (52 grams with lenses) and offer about four hours of battery life. There’s no charging case; the glasses must be charged directly using the included USB-connected dongle.

The glasses are extremely simple, offering transcription and translation features—with support for about 80 languages, which is impressive. I unfortunately found the prescription lenses Captify sent to be the blurriest of the bunch, making the captions comparatively hard to read. And while the device supports offline transcription, performance suffered badly when disconnected from the internet. I couldn’t get translations to work at all when offline. For $15/month, you get better accuracy and speaker differentiation, and access to AI summaries of conversations. Prescription lenses cost between $99 and $600.

