Today’s security giants have all become data platforms: from the traditional SIEMs (Security Information and Event Management) like Splunk, to CSPMs (Cloud Security Posture Management) like Wiz, to network security companies like Palo Alto Networks and endpoint security companies like CrowdStrike.
They are deeply entrenched with enterprises, and seem nearly impossible to rip out.
But they were designed for human analysts, and often don’t do that job particularly well: many enterprises complain about paying millions to store security data they can barely use.
Meanwhile, cybersecurity is in an escalating arms race of autonomy. Attackers are using AI agents to infiltrate systems at an unthinkable scale and pace. To counter, defenders are replacing slow, manual workflows with AI agents to triage alerts, hunt threats, and close vulnerabilities before they’re exploited.
Operating at agentic scale will push security data infrastructure to its breaking point. But these platforms aren’t easy to change. They are large, complex, mission-critical systems.
So what will happen to these data giants? Their moats are strong for now, but we think AI security agents will be a trojan horse for a dramatic platform shift.
Security teams already struggle with their data platforms today. Data volumes and costs are ballooning; enterprises often see their security data volumes grow 30-40% yearly. And making use of that data is hard: analysts write queries across multiple complex systems and languages, wait minutes to hours for them to run, then wrangle results in spreadsheets. It’s hard to figure out what’s already happened, let alone develop proactive detections and defenses.
We already see this being transformed by AI. Security agents (like Theory portfolio companies Dropzone in security operations and Maze in vulnerability management) will have superhuman knowledge of every platform in their domain. They’ll know every query language and schema quirk. They can wrangle data like the best analyst, and reason about attack patterns like an expert security researcher.
The result? Simple natural-language search and analysis; smarter, context-informed behavioral detections; and a rapid shift from a small number of human-led analyses to a massive number of queries executed by AI agents.
If you were building an AI security agent that queries massive volumes of data 24/7, should you sit on top of the existing data stack or build your own?
There are strong arguments on both sides:
Sitting on top of the existing stack (sometimes called a federated or overlay model) is compelling because it’s easy. Customers don’t need to worry about a migration. They can keep using their existing tools and analysts don’t need retraining. AI systems will be able to deliver value practically from day one.
An objection to this model is often cost. If a company already spends millions on their core security data platforms, can they justify spending millions more on agents just to make that data useful? It can be a tough pitch for a CISO to make to their CFO, though AI’s ability to improve security posture and automate labor is undeniable.
Building your own integrated/consolidated stack is the best way to make AI systems work well. It’s easier for agents to make use of data with clean, consistent, and normalized schemas. You don’t need to worry about maintaining a suite of connectors and integrations. You can drive better cost and performance on a cheap, modern database like ClickHouse. And you can build more intelligent systems, like deciding how to ingest, transform, and retain data based on downstream AI agent needs.
But this approach comes with massive switching costs. For most enterprises, a security data migration is slow, expensive, and risky. For some, it is practically impossible due to regulatory requirements or legacy/on-prem infrastructure. Despite the benefits of an integrated platform, the prospect of a long, costly migration to realize them can be insurmountable.
Despite the benefits of an integrated data stack for AI, high switching costs mean the security data giants are safe for now. But the way they could be unseated is clear.
AI-native entrants can enter as an overlay, integrating with existing data platforms and running analysis on top. It’s more work for the startup: they have to handle messy legacy data sources, maintain integrations, and demonstrate hard ROI (e.g. from time/labor savings). But it lets them provide value to customers instantly with minimal implementation risk.
These AI products will abstract away underlying platforms as agents take over most data interactions. When their contract for an underlying data platform is up, customers will wonder if they really need the expensive legacy infrastructure, when the AI system on top of it can provide a cheaper, more performant option using raw logs, commodity storage, and open-source databases.
This path will break the stranglehold of today’s security giants, and shift value from the data layer to the AI layer over time. It creates a generational opportunity to build massive, foundational new security companies that will rival today’s behemoths.
If you have thoughts on the evolution of security data platforms, I’d love to hear from you: at@theoryvc.com.
A blow to the head from a surfing incident left me couch-ridden over the holidays. For someone who struggles with taking time off, the forced sedentary lifestyle was a challenge. Naturally, my brother and I became prediction market fanatics. With news and sports now one of my primary information flows, the order book became our competitive arena.
We traded everything: from geopolitical tail risks like the probability of a U.S. invasion of Venezuela (a short position that paid out at the end of the year – just barely), to the NBA. We eventually landed on a strategy that 12x’d our account over 10 days, primarily by identifying mispriced underdogs where the order book skewed too heavily toward favorites. You could calculate the deviation between the Vegas lines and the prediction market books pretty easily. For instance, we took the Timberwolves against OKC at nearly 10:1 odds. If you’ve seen Anthony Edwards play recently, you know the man is a monster; it was a fundamental mispricing. Lastly, we looked at arbs across platforms, with some opportunities for almost 10% “risk-free” profits, ignoring settlement risk.
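The back-of-envelope math here is simple enough to sketch. The snippet below is a toy illustration with hypothetical numbers (not our actual trades): it converts American odds into an implied probability, measures the gap against a prediction-market price, and prices a two-leg cross-platform arb.

```python
def american_to_prob(odds: int) -> float:
    """Convert American odds into the implied win probability (vig included)."""
    if odds < 0:
        return -odds / (-odds + 100)
    return 100 / (odds + 100)

def edge_vs_market(vegas_odds: int, market_price: float) -> float:
    """Vegas-implied probability minus the prediction-market YES price.

    A large positive value suggests the order book is underpricing the
    team relative to sportsbook consensus."""
    return american_to_prob(vegas_odds) - market_price

def arb_margin(yes_price_a: float, no_price_b: float) -> float:
    """Buy YES on platform A and NO on platform B.

    If the two prices sum to less than $1, the pair pays $1 at settlement
    regardless of outcome; the shortfall is gross profit per contract
    (ignoring fees and settlement risk)."""
    cost = yes_price_a + no_price_b
    return (1.0 - cost) / cost  # return on capital deployed

# Hypothetical: Vegas has the underdog at +450 (~18%), the market at 9 cents.
edge = edge_vs_market(+450, 0.09)   # ~9 points of mispricing
# Hypothetical: YES at 46c on one venue, NO at 45c on another.
arb = arb_margin(0.46, 0.45)        # ~10% gross, before fees
```

The interesting work, of course, is in the parts this sketch hand-waves: fees, settlement risk, and whether the book is deep enough to fill you at those prices.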
But as we watched the liquidity move and risk adjust in real-time, the cracks in the current infrastructure became obvious. While the wisdom of the crowds is a poetic concept, the current reality of prediction markets faces a structural ceiling.
The primary hurdle for prediction markets today is a lack of deep liquidity. Liquidity is the bridge between a theoretical price and a tradable one; it ensures that a high-conviction trade reveals new information to the market rather than simply breaking the order book. If a prediction market is thin (low liquidity), it’s just a place where a few people are guessing. For it to become a real financial tool, it needs to be deep (high liquidity), so that big players can move millions of dollars without accidentally breaking the price. Liquidity and execution reliability are the crux of trading infrastructure. Market Makers (MMs) who can’t update their spreads quickly across deep order books get picked off; and when MMs can’t hedge or price accurately, liquidity vanishes.
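What “depth” means to a large trader can be made concrete with a toy order-book walk (all numbers hypothetical): fill a big market order against the resting levels and see how far the average fill price drifts from the touch.

```python
def fill_market_order(book, qty):
    """Walk resting asks, given as (price, size) levels, to fill `qty` contracts.

    Returns (average fill price, unfilled remainder). On a thin book a large
    order blows through the levels, so the drift in average price — the
    slippage — is how thinness shows up to a big trader."""
    cost, remaining = 0.0, qty
    for price, size in book:
        take = min(size, remaining)
        cost += take * price
        remaining -= take
        if remaining == 0:
            break
    filled = qty - remaining
    avg = cost / filled if filled else 0.0
    return avg, remaining

# Hypothetical thin book: only 500 contracts resting within 5 cents of the touch.
thin = [(0.52, 200), (0.55, 150), (0.57, 150)]
avg, left = fill_market_order(thin, 5000)
# A 5,000-lot clears the entire book (avg fill 0.544, 4,500 contracts unfilled):
# the order "breaks the price" rather than revealing information at a price.
```

A deep book is one where that same 5,000-lot fills within a tick or two of the top — which is exactly what big players need before they will show up.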
Today, most of the liquidity is concentrated in sports. This is because the Designated Contract Market (DCM) license (only 20 issued total) allows people to trade on sports in states where sports betting is illegal.
To move beyond sports betting, these markets must solve the Toxic Flow problem.
Prediction markets inherently encourage participants with inside information to place asymmetric bets. In a traditional equity market, insider trading is a crime; in a prediction market, it is a mechanism for price discovery. This creates a hostile environment for MMs. If an MM knows they are constantly being run over by insiders, they widen their spreads or leave the book entirely. This is known as the adverse selection problem.
In addition, one of the most significant barriers to professionalizing prediction markets is the lack of capital efficiency. Currently, most prediction markets are fully collateralized (1:1 margin). If you want to bet $100, you must put up $100.
In the world of professional trading, this is an anomaly. In traditional derivatives, like Perpetual Swaps (Perps) or Futures, traders use leverage without tying up their entire balance sheet. Without leverage, you cap the Return on Equity (ROE) that would make these markets worth prioritizing. And because there is no margining, unwinding a position is harder: in a fully collateralized market, you can’t simply ‘net’ your way out of a losing position; you are forced to hold your high-cost, binary bet to the bitter end unless you find a new buyer willing to pay the full face value upfront. This capital lock-up is a non-starter for high-frequency market makers who need to recycle capital every millisecond.
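A back-of-envelope comparison (hypothetical numbers) shows why full collateralization caps returns:

```python
def roe(pnl: float, capital_posted: float) -> float:
    """Return on equity: profit over the capital actually tied up."""
    return pnl / capital_posted

notional = 100.0   # face value of the position
pnl = 8.0          # hypothetical profit at settlement

# Fully collateralized (today's prediction markets): post the full $100.
full_collateral = roe(pnl, notional)    # 8% ROE

# Margined derivative (e.g. a perp at 10x leverage): post only $10.
margined = roe(pnl, notional / 10)      # 80% ROE on the same trade
```

Same trade, same profit — but the margined desk can run ten of these positions on the capital the fully collateralized desk burns on one.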
If prediction markets are to become a multi-trillion dollar asset class, they must evolve into something resembling Lloyd’s of London – a marketplace where specialized groups compete to underwrite unique, high-stakes risks.
The value proposition isn’t just knowing who will win an election; it’s the ability to represent and trade risk that was previously unquantifiable and specific to your financial situation – imagine a corporation hedging a specific existential risk directly on an open market.
The price signal provided to the rest of the world might be a prediction market’s greatest utility. By allowing corporations to hedge specific existential risks, prediction markets move from a retail speculator’s tool to a fundamental piece of financial infrastructure.
There is, however, a final boss in this evolution: Counterparty Risk.
An old colleague of mine, who built the derivatives desk at a major bank, reminded me that in high-stakes finance, people sometimes prefer counterparty risk, provided they know whom to sue. There is a psychological and legal comfort in facing a regulated bank that has a history of government backstops (the Too Big to Fail insurance).
This leads to a fundamental question for the next generation of traders and corporate hedgers:
Would you rather face the settlement risk of a decentralized protocol like UMA or Polymarket, or would you rather face a Tier-1 bank?
While many people view code as law, the institutional world still prefers a throat to choke. For prediction markets to reach the scale of the global derivatives market, they must bridge this gap between the trustless efficiency of the blockchain and the legal recourse of traditional finance.
To fix this, prediction markets need to look more like the ISDA (International Swaps and Derivatives Association).
The ISDA Master Agreement is the Holy Grail of finance. It’s a standardized contract that dictates exactly what happens when things go wrong: defaults, bankruptcies, or settlement disputes. It removes the need for a middleman to decide the winner because the rules are pre-agreed upon globally.
We are starting to see a move away from centralized processors and human oracles as the settlement layer.
The future is Standardized Event Contracts. These are environments where the settlement source (e.g., a specific BLS data point or a federal court filing) is hard-coded into the contract structure. By removing the centralized processor, or the human oracle, we move toward a world where a prediction market contract is as legally and financially robust as a Japanese yen swap. One could argue that event contracts are overexposed to definitional edge cases, which make them inherently difficult to compare to ISDA.
However, standardization enables the institutionalization of finite outcomes: elections, referenda, regulatory decisions as tradeable financial instruments. Events like Brexit, or the 2016 U.S. election created enormous economic consequences, yet there was no direct way for institutions to express, hedge, or transfer that risk. Standardized event contracts turn discrete outcomes into portfolio components, allowing investors to size exposure, hedge downstream impacts, and construct event-driven strategies with the same rigor applied to rates, FX, or credit. That capability unlocks a new layer of demand from asset managers, corporates, and risk desks who have historically had views on these events, but no clean way to trade them. This is how prediction markets become a trillion dollar asset class.
The potential is there. The monster in the room isn't just Anthony Edwards; it's the untapped liquidity of corporate risk waiting for a robust enough venue to call home.