The 9 top cybersecurity startups from Disrupt Startup Battlefield | TechCrunch

Every year, TechCrunch’s Startup Battlefield pitch contest draws thousands of applicants. We whittle those applications down to the top 200 contenders, and of them, the top 20 compete on the big stage to become the winner, taking home the Startup Battlefield Cup and a $100,000 cash prize. But the remaining 180 startups also impressed us in their respective categories and compete in their own pitch competition.

Here is the full list of the cybersecurity Startup Battlefield 200 selectees, along with a note on why they landed in the competition.

AIM Intelligence

What it does: AIM offers enterprise cybersecurity products that both protect against new AI-enabled attacks and use AI in that protection.

Why it’s noteworthy: AIM uses AI to conduct penetration tests of AI-optimized attacks and to protect corporate AI systems with customized guardrails, and it offers an AI safety planning tool.

Corgea

What it does: Corgea is an AI-driven enterprise security product that can scan code for flaws as well as find broken code intended to implement security measures such as user authentication.

Why it’s noteworthy: The product allows the creation of AI agents that can secure code and works, the company says, with any popular language and its libraries.

CyDeploy

What it does: CyDeploy offers a security product that automates asset discovery and mapping of all the apps and devices on a network.

Why it’s noteworthy: Once the assets are mapped, the product creates digital twins for sandboxed testing and lets security orgs use AI to automate other security processes as well.

Cyntegra

What it does: Cyntegra offers a hardware-plus-software solution that prevents ransomware attacks.

Why it’s noteworthy: By locking away a secure backup of the system, the product ensures ransomware doesn’t win: it can restore the operating system, apps, data, and credentials within minutes of an attack.

HACKERverse

What it does: HACKERverse’s product deploys autonomous AI agents to run known hacker attacks against a company’s defenses in an “isolated battlefield.”

Why it’s noteworthy: The tool tests and verifies that vendor security tools actually work as advertised.

Mill Pond Research

What it does: Mill Pond detects and secures unmanaged AI.

Why it’s noteworthy: As employees adopt AI to assist them in their jobs, this tool can detect AI tools that are accessing sensitive data or otherwise creating potential security issues in the organization.

Polygraf AI

What it does: Polygraf AI offers small language models tuned for cybersecurity purposes.

Why it’s noteworthy: Enterprises use the Polygraf models to enforce compliance, protect data, detect unauthorized AI usage, and spot deepfakes, among other uses.

TruSources

What it does: TruSources can detect AI deepfakes, whether audio, video, or images.

Why it’s noteworthy: The tech can work in real time for areas like identity authentication, age verification, and identity fraud prevention.

ZEST Security

What it does: ZEST is an AI-powered enterprise security platform that helps infosec teams detect and resolve cloud security issues.

Why it’s noteworthy: ZEST helps teams quickly track and mitigate known but unpatched security vulnerabilities, and it unifies vulnerability management across clouds and apps.

Maul’s Lightsabers in ‘Shadow Lord’ Are Powered by Sam Witwer’s Screams

The new Star Wars animated series Maul: Shadow Lord is doing some very cool things with lightsabers—and not just spinning them around with reckless abandon, because we’ve got Maul himself and a couple of Inquisitors who all love to do exactly that with their weapons. They look unlike almost anything we’ve seen from the weapons in Lucasfilm’s past output: blades that flicker and snarl like their wielders do, living flames that carve paths of incandescent energy across the screen instead of the typically clean, minimalistic energy we see from them.

It makes Shadow Lord look even more visually impressive than it already is, and of course, the idea of lightsabers as gouts of flaming plasma is naturally befitting everyone’s favorite slightly pathetic but trying-his-best edgelord like Maul. But it turns out Shadow Lord’s lightsabers—Maul’s specifically—are going the extra edgelord mile. Because there’s screaming in the sound mix.

Not just any screaming either, but Sam Witwer’s own howls.

The delightfully silly detail was revealed by the show’s supervising sound editor, David W. Collins, in a new featurette about the making of Shadow Lord, which also shows Witwer performing some of his own moves for animation reference. While Lucasfilm creatives were quick to note that the show does not use mocap for its animation and the footage was strictly a reference point, there’s still something very funny about Witwer giving himself some Maul tattoo makeup for the footage, to boot.

It’s long been clear that Witwer has put a lot of time and thought into his approach to Maul’s animated legacy over the past decade and a half, but now he’s put his vocals into it in a very different manner for Shadow Lord.

Want more io9 news? Check out when to expect the latest Marvel, Star Wars, and Star Trek releases, what’s next for the DC Universe on film and TV, and everything you need to know about the future of Doctor Who.

Meta’s New AI Asked for My Raw Health Data—and Gave Me Terrible Advice

Medical experts I spoke with balked at the idea of uploading their own health data for an AI model, like Muse Spark, to analyze. “These chatbots now allow you to connect your own biometric data, put in your own lab information, and honestly, that makes me pretty nervous,” says Gauri Agarwal, a doctor of medicine and associate professor at the University of Miami. “I certainly wouldn’t connect my own health information to a service that I’m not fully able to control, understand where that information is being stored, or how it’s being utilized.” She recommends people stick to lower-stakes, more general interactions, like prepping questions for your doctor.

It can be tempting to rely on AI-assisted help for interpreting health information, especially with the skyrocketing cost of medical treatments and the inaccessibility of regular doctor visits for some people navigating the US health care system.

“You will be forgiven for going online and delegating what used to be a powerful, important personal relationship between a doctor and a patient—to a robot,” says Kenneth Goodman, founder of the University of Miami’s Institute for Bioethics and Health Policy. “I think running into that without due diligence is dangerous.” Before he considers using any of these tools, Goodman wants to see research proving that they are beneficial for your health, not just better at answering health questions than some competitor chatbot.

When I asked Meta AI for more information about how it would interpret my health information, if I provided any, the chatbot said it was not trying to replace my physician; the outputs were for educational purposes. “Think of me as a med school professor, not your doctor,” said Meta AI. That’s still a lofty claim.

The bot said the best way to get an interpretation of my health data was just to “dump the raw data,” like clinical lab reports, and tell it what my goals were. Meta AI would then create charts, summarize the info, and give a “referral nudge if needed.” In other chats I conducted with Meta AI, the bot prompted me to strip personal details before uploading lab results, but these caveats were not present in every test conversation.

“People have long used the internet to ask health questions,” a Meta spokesperson tells WIRED. “With Meta AI and Muse Spark, people are in control of what information to share, and our terms make clear they should only share what they’re comfortable with.”

In addition to privacy concerns, experts I spoke with expressed trepidation about how these AI tools can be sycophantic and influenced by how users ask questions. “A model might take the information that’s provided more as a given without questioning the assumptions that the patient inherently made when asking the question,” says Agarwal.

When I asked how to lose weight and nudged the bot toward extreme answers, Meta AI helped in ways that could be catastrophic for someone with anorexia. Asking about the benefits of intermittent fasting, I told Meta AI that I wanted to fast five days every week. Despite flagging that this was not for most people and could put me at risk of an eating disorder, Meta AI crafted a meal plan in which I would eat only around 500 calories most days, which would leave me malnourished.
