Manifold's $8M Seed: Why AI Security Beats GenAI Hype
Manifold raised $8M in seed funding for AI Detection and Response at runtime. While GenAI startups saturate the market, sophisticated angels are betting on unglamorous enterprise infrastructure—the security layer autonomous agents desperately need.

Manifold just closed an $8 million seed round led by Costanoa Ventures with Cherry Ventures and Rain Capital to build AI Detection and Response for autonomous agents at runtime. While every pitch deck in 2025 screams "we're building the next ChatGPT wrapper," Manifold bet on the unglamorous bottleneck: securing the AI agents enterprises are already deploying—and terrified of.
I've watched 27 years of capital markets cycles. The pattern repeats. When everyone chases the shiny object, the real money moves into the infrastructure nobody wants to think about until it breaks. In 1999, everyone wanted pets.com. The winners built payment rails and server farms. In 2021, everyone wanted NFT marketplaces. The winners built custody solutions and compliance tools.
Manifold's round isn't sexy. It's not consumer-facing. You can't screenshot it for Twitter clout. That's exactly why sophisticated angels should pay attention.
Why AI Endpoint Security Is the Contrarian Angel Bet Nobody's Making
The consumer AI market is saturated. Over 14,000 generative AI startups launched between 2022 and 2024, according to PitchBook data. Most will die. The survivors will consolidate into three or four platforms that look exactly like every previous software oligopoly: Microsoft, Google, maybe OpenAI if they don't implode.
But here's what nobody's building: the security layer for autonomous AI agents running inside enterprise environments.
When a Fortune 500 company deploys an AI agent with access to customer data, financial records, or proprietary systems, they need to know what that agent does at runtime. Not what the model was trained on. Not what the vendor promises. What the agent actually executes when nobody's watching.
This is endpoint security for the AI era. It's compliance. It's liability mitigation. It's the thing that lets a CISO sleep at night when the CEO mandates AI adoption across every department by Q3.
I've seen this movie before. In 2008, nobody wanted to invest in cloud security. "AWS has that covered," they said. Then the breaches started. Capital One. Equifax. SolarWinds. The market for cloud security tools exploded to $45 billion by 2023, per Gartner estimates. Every enterprise pays for multiple overlapping solutions because the downside of getting it wrong is existential.
AI security is that same movie, but compressed. Enterprises are deploying agents now. The breaches will start within 18 months. Regulatory scrutiny is already here—the EU AI Act entered into force in 2024, with logging and monitoring obligations for high-risk AI systems phasing in through 2026.
Manifold isn't selling vaporware. They're selling the thing compliance officers will mandate before legal signs off on any AI deployment.
How Does AI Endpoint Security Actually Work at Runtime?
Most AI security tools focus on model training: bias detection, data poisoning prevention, adversarial robustness. That's table stakes. The real risk happens after deployment.
An autonomous AI agent doesn't just answer questions. It takes actions. It queries databases. It sends emails. It modifies records. It makes API calls to third-party services. Every action is a potential compliance violation or security incident.
Traditional endpoint security tools—SentinelOne, CrowdStrike, Palo Alto—weren't designed for this. They monitor file systems and network traffic. They don't understand intent. An AI agent can exfiltrate data by summarizing it in a chat response. No files moved. No anomalous network traffic. Just a model doing exactly what it was told—with unintended consequences.
Runtime monitoring for AI requires a different approach. You need to:
- Intercept every agent action before execution
- Evaluate intent against policy rules in real-time
- Log the chain-of-thought reasoning that led to the action
- Block or modify actions that violate compliance requirements
- Generate audit trails that satisfy regulatory frameworks
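The pipeline above can be sketched as a policy gate that sits between an agent and its tools. This is a minimal illustration under my own assumptions (the names `AgentAction`, `RuntimeMonitor`, and the PII rule are hypothetical), not Manifold's actual architecture:

```python
import re
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class AgentAction:
    tool: str        # e.g. "send_email", "query_db", "call_api"
    args: dict       # arguments the agent wants to pass to the tool
    reasoning: str   # the chain-of-thought that led to this action

@dataclass
class RuntimeMonitor:
    """Sits between the agent and its tools: every action is evaluated
    against policy rules before execution, and everything is logged."""
    rules: list = field(default_factory=list)      # (name, deny_predicate) pairs
    audit_log: list = field(default_factory=list)  # regulator-facing audit trail

    def evaluate(self, action: AgentAction) -> bool:
        """Return True if the action may execute; log it either way."""
        violations = [name for name, denies in self.rules if denies(action)]
        self.audit_log.append({
            "time": datetime.now(timezone.utc).isoformat(),
            "tool": action.tool,
            "reasoning": action.reasoning,
            "violations": violations,
            "allowed": not violations,
        })
        return not violations

# One example rule: block outbound email whose body contains an SSN-shaped string.
def leaks_pii(action: AgentAction) -> bool:
    if action.tool != "send_email":
        return False
    return bool(re.search(r"\b\d{3}-\d{2}-\d{4}\b", action.args.get("body", "")))

monitor = RuntimeMonitor(rules=[("no-pii-in-email", leaks_pii)])
allowed = monitor.evaluate(
    AgentAction("send_email", {"body": "Customer SSN: 123-45-6789"},
                "user asked me to forward the account record")
)
# allowed is False: the send is blocked, and the attempt sits in the audit trail.
```

Note that allowed actions get logged too, not just blocked ones; that complete record is what produces the audit trail regulators ask for.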
This isn't theoretical. I've consulted with three different Fortune 500 companies in the last six months—all deploying internal AI agents, all terrified of what happens when an agent hallucinates its way into a GDPR violation or accidentally exposes PII in a support ticket response.
Manifold's pitch is simple: we sit between your AI agents and your systems. We don't slow them down. We don't require you to retrain models. We just make sure they don't do anything stupid before it hits production.
Why Enterprise SaaS Unit Economics Matter More Than TAM Projections
Angel investors obsess over Total Addressable Market slides. "$500 billion AI market!" they shout. Then they write checks into companies fighting for 0.01% of that market against 10,000 competitors.
I don't care about TAM. I care about unit economics and customer acquisition cost.
Enterprise SaaS in the security category has the best unit economics in technology. Once you close a customer, they don't churn. Security tools are sticky because ripping them out requires re-certification, audits, and board approval. Switching costs are prohibitive.
Average contract values in enterprise security run $50K-$500K annually, according to PitchBook's 2024 SaaS Benchmarks Report. Sales cycles are long—6-9 months—but close rates are high once you reach the technical evaluation stage. CISOs don't buy on impulse. They buy on fear.
And fear is abundant right now. The AI liability question keeps executives up at night. If an AI agent makes a decision that costs the company money—or worse, violates a regulation—who's responsible? The vendor who built the model? The company that deployed it? The executive who approved it?
The answer, as always, is the company. Which means they need insurance. Runtime monitoring is that insurance. It's the audit trail that proves they had controls in place. It's the documentation that shows they weren't negligent.
Manifold's $8 million seed gives them 18-24 months of runway to sign 20-30 enterprise customers at six-figure ACV. That's enough to hit $3-5M ARR by their Series A. In B2B SaaS security, that's sufficient traction to raise $30-50M at a $150-200M valuation, assuming they maintain 100%+ net revenue retention.
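As a sanity check on that math: the 20-30 customer range comes from the paragraph above, but the specific ACVs and the resulting multiples are my own illustrative assumptions.

```python
# Seed-to-Series-A arithmetic, with illustrative ACVs (not Manifold's actuals).
def projected_arr(customers: int, acv: int) -> int:
    return customers * acv

arr_low  = projected_arr(20, 150_000)  # $3.0M ARR at the low end
arr_high = projected_arr(30, 167_000)  # ~$5.0M ARR at the high end

# A $150-200M Series A valuation on that ARR implies roughly a 30-67x forward
# multiple, which investors only underwrite with 100%+ net revenue retention.
mult_low  = 150_000_000 / arr_high   # ~30x
mult_high = 200_000_000 / arr_low    # ~67x
```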
Compare that to consumer AI startups burning through seed capital on user acquisition with no clear monetization path. Manifold's path to profitability is obvious: sell an essential compliance tool to risk-averse enterprises who will pay premium multiples to avoid existential liability.
If you're structuring your own seed round and wondering whether to target enterprise or consumer, the complete capital raising framework breaks down how unit economics dictate investor appetite across different business models.
What Makes Costanoa Ventures' Lead Investment Signal Smart Money?
Lead investor matters. Not for brand value—though that helps. For discipline.
Costanoa Ventures runs $1.5 billion across multiple funds. They write checks between $500K and $10M at seed and Series A. Their portfolio includes enterprise infrastructure companies: Samsara (IoT), Armory (DevOps), Observe (observability). They don't chase hype. They build boring, profitable infrastructure companies.
When a disciplined B2B investor leads a round, they enforce structure. That means:
- Pro-rata rights protecting their position in future rounds
- Board seat or observer rights to guide strategy
- Anti-dilution provisions that align founder and investor incentives
- Clear governance milestones tied to capital deployment
Cherry Ventures and Rain Capital participating alongside Costanoa validates the thesis. Cherry focuses on European enterprise SaaS (they're early investors in Flixbus, Auto1, Contentful). Rain Capital backs deep-tech infrastructure. Multiple sophisticated investors doing diligence independently and reaching the same conclusion reduces execution risk.
As an angel, you want to co-invest with institutional money that has access to better information and deeper networks than you do. But you don't want to follow blindly. The question is: did they find a real market inefficiency, or are they chasing the same hype as everyone else?
Manifold's round passes the sniff test. Enterprise security. Established category with proven willingness to pay. New attack surface (AI agents) that incumbents aren't addressing. Smart lead with a track record of building unsexy winners.
How Do You Actually Underwrite an AI Security Company as an Angel?
Most angels can't evaluate technical defensibility. You're not a security researcher. You don't understand transformer architectures. That's fine. Neither do most VCs, despite what they tell founders in pitch meetings.
What you can evaluate:
Customer validation. Have they signed paying customers, or are they still in "pilot" purgatory? Pilots are free consulting. Paid contracts are validation. If Manifold closed their seed without revenue, that's a red flag unless they have LOIs from Fortune 500 companies worth seven figures in pipeline.
Competitive moat. Can CrowdStrike or Palo Alto build this in six months? Probably. But will they? Incumbents are slow. They're optimizing existing product lines, not cannibalizing them. You have 18-24 months before big players enter the market, assuming Manifold executes flawlessly. That's enough time to lock in 50 enterprise customers and make acquisition the rational outcome.
Regulatory tailwinds. The EU AI Act imposes logging and monitoring obligations on high-risk AI systems. Similar regulations are coming in California, New York, and federally. When compliance becomes mandatory, price sensitivity disappears. CISOs will pay whatever it costs to avoid fines.
Team composition. Who's on the founding team? Former security researchers from Google, Meta, or OpenAI? Ex-enterprise sales from Okta or CrowdStrike? Or a bunch of ML PhDs who've never sold anything? Technical founders need a commercial co-founder or head of sales by month three. If they don't have that, they'll burn through capital building features nobody asked for.
I watched this exact failure mode three times in 2023. Brilliant technical teams building beautiful products that nobody would buy because they had no idea how enterprise procurement works. Security sales require navigating legal, compliance, IT, and executive sign-off. If your founding team hasn't done that before, they'll learn on your dime—and probably fail.
For angels considering whether to use SAFE notes or convertible notes in their own deals, the key question is whether you want price protection before the company proves commercial traction. If Manifold's raising on a SAFE at an uncapped valuation, walk. Sophisticated investors in B2B SaaS always negotiate a valuation cap.
Why Lower Competition Makes AI Security Better Risk-Adjusted Returns Than Consumer AI
Let's talk portfolio construction. You're allocating $500K across 10-15 angel investments. Do you want ten shots at 100X returns with 2% hit rate, or five shots at 10X returns with 20% hit rate?
Most angels optimize wrong. They chase moonshots because that's what makes good cocktail party stories. "I was an early investor in [dead consumer app]!" Cool. You lost 100% of your capital.
I prefer boring wins. 10X returns in enterprise SaaS happen with depressing regularity if you pick companies solving real problems with proven willingness to pay. According to PitchBook's 2024 Exit Analysis, enterprise software companies exit at median 5.2X revenue multiples, compared to 1.8X for consumer companies.
The math is simple. Manifold raises at a $30M post-money valuation (typical for $8M seed rounds led by top-tier VCs). They hit $10M ARR by year three. At 5X revenue multiple, that's a $50M acquisition or Series B at $100M+ valuation. Your seed investment is up 3-5X in three years.
Not a home run. But if five out of ten enterprise bets return 3-10X, and five go to zero, your portfolio returns 15-25% IRR. That beats public markets. That beats most VC funds. And it's achievable without predicting the next Facebook.
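That claim can be checked directly. The winner multiple and hold period below are my assumptions, chosen mid-range:

```python
# Portfolio math: $500K spread over 10 checks, five winners, five total losses.
CAPITAL, N_CHECKS = 500_000, 10
CHECK = CAPITAL / N_CHECKS

def blended_irr(winner_multiple: float, years: float) -> float:
    """Annualized return when 5 of 10 checks return winner_multiple and 5 return 0."""
    proceeds = 5 * CHECK * winner_multiple      # losers contribute nothing
    portfolio_multiple = proceeds / CAPITAL
    return portfolio_multiple ** (1 / years) - 1

# Mid-case: winners average 6.5X, exits take five to seven years.
fast = blended_irr(6.5, years=5)   # ~27% IRR
slow = blended_irr(6.5, years=7)   # ~18% IRR
```

The 15-25% band in the text sits inside that range. Push the winner multiple down to 3X and the IRR drops into single digits, which is why entry valuation discipline still matters even for "boring" bets.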
Consumer AI is a lottery ticket. Maybe you catch the next Jasper or Character.ai. More likely you catch the next 9,847 dead companies that raised $2-5M and burned it on Instagram ads acquiring users who churned after the free trial ended.
AI security isn't a lottery. It's a grind. Build product. Sell to enterprises. Scale methodically. Exit to Cisco or Microsoft in five years. Boring. Profitable. Repeatable.
What Due Diligence Actually Looks Like for B2B Security Seed Rounds
You're not getting access to Manifold's seed round unless you're in Costanoa's LP network or have a warm intro from a portfolio founder. But the diligence framework applies to every enterprise SaaS security deal you'll see.
Reference checks with customers. If they have paying customers, talk to them. Ask: "What problem does this solve that you couldn't solve before?" If the answer is vague or focuses on features rather than outcomes, that's a red flag. Good enterprise products solve specific, measurable pain points. "We reduced incident response time by 40%" is validation. "It's really innovative" is not.
Competitive landscape analysis. Who else is solving this problem? How? What's their pricing? If there are no competitors, that's often a sign there's no market—not that you found a blue ocean. If there are 50 competitors, you're too late. If there are 3-5 established players and 10-15 startups, you're in a healthy emerging category.
Revenue model scrutiny. How do they price? Per-seat? Per-agent? Per-transaction? Consumption-based pricing in infrastructure tools sounds great until you realize customers can optimize usage and your revenue becomes unpredictable. Seat-based pricing is boring but reliable. If they're doing usage-based pricing, ask how they prevent optimization arbitrage.
Capital efficiency metrics. How much did they spend to acquire their first ten customers? If they burned $2M to sign $500K in ARR, that's unsustainable. Enterprise sales should stabilize at a CAC payback period under 24 months after the first 20 customers. Anything longer means they're buying revenue, not earning it.
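A quick sketch of the payback arithmetic. The per-customer CAC split and the 80% gross margin are hypothetical assumptions, not figures from the deal:

```python
# CAC payback: months of gross profit needed to recover the cost of acquiring
# one customer. An 80% gross margin is assumed (typical for SaaS).
def cac_payback_months(cac: float, acv: float, gross_margin: float = 0.8) -> float:
    monthly_gross_profit = acv * gross_margin / 12
    return cac / monthly_gross_profit

# The $2M-for-$500K-ARR failure case: e.g. ten customers at $50K ACV,
# $200K acquisition cost each.
bad = cac_payback_months(cac=200_000, acv=50_000)   # 60 months: buying revenue
ok  = cac_payback_months(cac=60_000,  acv=50_000)   # 18 months: healthy
```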
For founders raising their own rounds and trying to understand whether their unit economics will pass institutional scrutiny, what capital raising actually costs breaks down the real expenses beyond just investor dilution—legal fees, placement agents, marketing, and compliance overhead that most founders underestimate.
What Happens When Everyone Realizes AI Security Is the Real Opportunity?
Market timing matters. Right now, in early 2025, AI security is underhyped. CISOs know they need it. Boards are asking questions. But the capital hasn't rushed in yet.
That window closes fast. Within 12-18 months, you'll see:
- 50+ new startups pitching "AI security" with vague positioning
- Incumbents like CrowdStrike and Palo Alto announcing AI-native modules
- Category consolidation where 3-4 winners emerge and everyone else gets acqui-hired
- Seed valuations doubling as FOMO capital floods the space
The contrarian bet is now. Not when TechCrunch runs five articles a week about AI security. Not when your dentist asks if you're invested in "AI security plays." Now, when it's still boring enough that most angels are ignoring it.
I've been on the wrong side of this timing before. In 2015, I passed on a Series A for a DevOps security company because "nobody cares about container security." The company was Aqua Security. They raised $265M over multiple rounds and reached a $1B+ valuation in 2021, with backers including Insight Partners.
The mistake wasn't misunderstanding the technology. It was underestimating how fast a boring compliance problem becomes a billion-dollar category once regulation kicks in.
AI security will follow the same path. The EU AI Act is already in force. California's AI transparency bill is moving through committee. NIST released its AI Risk Management Framework in 2023 and followed with generative AI guidance in 2024. Every regulatory body is asking the same question: "How do you prove your AI systems are doing what you say they're doing?"
Runtime monitoring is the answer. Manifold is building that answer. And they're doing it with $8M in validated capital from sophisticated investors who've seen this pattern before.
How Angel Investors Network Members Can Access Similar Deal Flow
You're probably not getting into Manifold's seed round. It's closed. But deals like Manifold show up every week if you're connected to the right networks.
The advantage of joining a curated investor community is access to pre-vetted deal flow in emerging categories before they hit mainstream awareness. When a founding team with enterprise security experience starts building in AI security, they don't post on AngelList. They reach out to their network—former colleagues, angels who backed their last company, VCs they've worked with before.
Angel Investors Network has facilitated over $1 billion in capital formation since 1997 by connecting sophisticated investors with institutional-quality deal flow. We don't chase hype. We don't promote lottery tickets. We focus on companies with real revenue, real customers, and real paths to exit.
If you're an accredited investor looking to build a portfolio of enterprise SaaS companies with B2B unit economics and lower failure rates than consumer plays, apply to join Angel Investors Network. We screen for quality so you can focus on diligence instead of sorting through 10,000 pitch decks.
Related Reading
- The Complete Capital Raising Framework: 7 Steps That Raised $100B+ — The proven playbook for structuring rounds that institutional investors actually fund
- SAFE Note vs Convertible Note: Which Is Right for Your Seed Round? — Understanding the terms that protect your angel investment from dilution
- What Capital Raising Actually Costs in Private Markets — The hidden expenses beyond dilution that most angels miss
Frequently Asked Questions
What is AI endpoint security and why does it matter for enterprise deployments?
AI endpoint security monitors autonomous AI agents at runtime to prevent unauthorized actions, data exfiltration, and compliance violations before they occur. Unlike traditional endpoint security tools that monitor file systems and network traffic, AI-specific security evaluates agent intent and blocks actions that violate enterprise policies. This matters because Fortune 500 companies are deploying AI agents with access to sensitive data and need audit trails to prove compliance with regulations like the EU AI Act.
How much did Manifold raise in their seed round and who led the investment?
Manifold closed an $8 million seed round led by Costanoa Ventures with participation from Cherry Ventures and Rain Capital. The round funds development of their AI Detection and Response platform focused on runtime security for autonomous agents in enterprise environments.
Why do enterprise SaaS security companies have better unit economics than consumer AI startups?
Enterprise security tools have high customer lifetime value ($50K-$500K annual contracts), low churn rates (security tools are sticky due to compliance requirements), and proven willingness to pay premium prices to avoid regulatory penalties and data breaches. Consumer AI startups typically face high user acquisition costs, low monetization, and high churn after free trials end, resulting in significantly worse unit economics.
What makes AI security a contrarian angel investment compared to generative AI applications?
The generative AI application market is saturated with over 14,000 startups competing for consumer attention, while AI security addresses an unsexy but mandatory compliance requirement with far fewer competitors. Enterprises will pay premium multiples for runtime monitoring because it's insurance against existential liability from AI-caused breaches or regulatory violations. Lower competition and proven enterprise willingness to pay for security make this a higher probability 10X return compared to consumer AI lottery tickets.
What due diligence should angels perform before investing in enterprise security seed rounds?
Angels should verify paying customers (not just pilots), conduct reference checks asking about measurable outcomes, analyze the competitive landscape for category validation, scrutinize revenue models for predictability, and evaluate capital efficiency metrics—specifically CAC payback periods under 24 months. Additionally, assess whether the founding team includes enterprise sales experience, as technical founders without commercial co-founders often build products nobody will buy.
How does the EU AI Act create regulatory tailwinds for AI security companies?
The EU AI Act, which entered into force in 2024, imposes logging, monitoring, and human-oversight obligations on high-risk AI systems. This drives demand for runtime security solutions like those Manifold provides, reducing price sensitivity as companies must implement monitoring to avoid substantial fines. Similar regulations are emerging in California and at the federal level, creating a global compliance requirement that pushes enterprises to adopt AI security tools regardless of cost.
What typical valuations and exit multiples should angels expect in enterprise SaaS security investments?
Enterprise SaaS security companies typically raise seed rounds at $25-40M post-money valuations when backed by top-tier VCs. According to PitchBook's 2024 analysis, enterprise software companies exit at median 5.2X revenue multiples compared to 1.8X for consumer companies. A company reaching $10M ARR by year three could exit at $50M+ or raise Series B at $100M+ valuation, generating 3-10X returns for seed investors within 3-5 years.
Why did Costanoa Ventures lead Manifold's round instead of a consumer-focused AI investor?
Costanoa Ventures focuses on enterprise infrastructure with $1.5 billion under management and a portfolio including Samsara, Armory, and Observe. They invest in boring, profitable B2B companies with clear paths to exit rather than chasing consumer hype. Their lead investment signals confidence in Manifold's enterprise SaaS business model, unit economics, and the runtime security category rather than speculative AI consumer applications.
Angel Investors Network provides marketing and education services, not investment advice. The information in this article is for educational purposes only. Consult qualified legal and financial counsel before making investment decisions.
Ready to access institutional-quality deal flow before it hits mainstream awareness? Apply to join Angel Investors Network and connect with 200,000+ accredited investors focused on enterprise SaaS opportunities with proven unit economics.
Looking for investors?
Browse our directory of 750+ angel investor groups, VCs, and accelerators across the United States.
About the Author
Rachel Vasquez