8 months of freezing cold each year is a great reason to lock in.
So Theory ventured up north to Waterloo, Ontario, to meet the next generation of incredible founders and technologists.
We were treated to the snowiest year since 1950. Classes were canceled and streets were buried, the kind of weather that gives you an excuse to stay home.
Instead, over 100 students from all years and programs joined us for two panels focused on early-career engineering and AI jobs.
Waterloo has a way of filtering for people who show up even when it’s uncomfortable.
The coat racks were overflowing as Prashanth (LanceDB), Nick (Neolific), and Anton (Clover Labs) started the night chatting with our Head of AI, Bryan Bischof (Theory Ventures), about how startups hire.
The conversation was candid, practical, and refreshingly casual, much like how early-stage hiring really works.
We discussed:
For students, it offered a clearer picture of how things actually work. For founders and operators in the room, it was a reminder that the best hires are often the ones who show initiative before they’re “ready.”
The second panel shifted from hiring mechanics to career risk.
I led a conversation with Jakob (Voltra), Sam (Upside Robotics), and Jerry (Akatos House) on what it actually means to break into the startup space.
Our conversation centred on a few deceptively simple questions:
A recurring theme was that early careers are more about exposure than optimization. Spending your time in an environment that forces you to grow matters more than optimizing for title, brand, or timing.
Theory has welcomed more than eight Waterloo interns to our engineering team to work on AI and data applications. Our internship program focuses on real products we actually use: pipelines to process, extract, and store call recordings; multi-modal AI tools for financial reporting; and LLM evaluation.
Additionally, our portfolio of investments is constantly expanding, and many of these companies are also turning to Waterloo to search for interns and full-time hires.
Our visit to campus focused on recruiting directly for our own team, making high-trust introductions to our portfolio companies, and building relationships with founders at the very beginning of their journey.
The benefits of being on campus outweighed the inconvenience of the snow.
Huge thanks to everyone who made this event possible: Waterloo Venture Group for their collaboration on the event, Communitech for the event space, and everyone who helped make the night what it was, including teams from LanceDB, Neolific, Akatos, Voltra, Clover Labs, and Upside Robotics.
Most of all, thank you to the students who showed up despite the weather.
We’ll be back.
Until recently, software was crafted by hand. Engineers wrote code like sculptors, painstakingly shaping and smoothing it into its final form.
LLM-powered coding agents replaced this process with something closer to 3D printing. Describe the object you want, and one pops out. The craft shifted from hand-sculpting to specification: tell the agent what to build, and it builds it.
But the future of software production is not 10 engineers standing in front of 10 3D printers. It’s not even 10 engineers managing 100 3D printers.
It’s something more radical: AI engineers will be managed and optimized by AI managers – potentially multiple levels of them. Humans won’t be in the production flow. Instead, they’ll be leaders and maintenance crews for self-optimizing agent factories.
The best companies are already starting to work this way. What does it mean for the industry?
The first impact of coding agents was turning software engineers into engineering managers/tech leads. Instead of writing code, they communicate goals in natural language, then provide feedback and review outputs as AI agents do the building.
Talented engineers have pushed this even further; they might manage 5-6 concurrent agents today. But this is naturally limited by the cognitive load of context-switching. Better agent management and collaboration platforms might increase that ratio by a factor of two. How could we increase our leverage by an order of magnitude, or more? Human managers won’t cut it.
The most recent generation of models has demonstrated it can not only do the work, but also plan and manage it.
Instead of writing prompts and reviewing outputs themselves, the best engineers are now moving even further up the stack. They're creating manager agents who plan work, orchestrate builder agents, review their outputs, suggest changes, and update prompts – all on their own.
The shift is striking. These engineers are typically not looking at code. They're often not even looking at prompts used to generate the code. They're operating at a higher layer of abstraction: defining goals, setting constraints, and evaluating outcomes. The actual work can then be done by dozens or even hundreds of agents at once.
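The loop described above — a manager agent that plans subtasks, dispatches them to builder agents, reviews the outputs, and revises the prompts on its own — can be sketched in miniature. Everything below is illustrative and hypothetical: `call_llm` is a stub standing in for real model API calls, and the planning and review logic is trivially simplified to show the shape of the hierarchy, not any particular product.

```python
# Hypothetical sketch of a two-level agent hierarchy: a "manager" loop
# that plans tasks, dispatches them to builder agents, reviews results,
# and retries with revised instructions. call_llm is a stub; a real
# system would call an LLM API here.

def call_llm(role: str, prompt: str) -> str:
    """Stub for a model call. Returns canned responses for the demo."""
    if role == "builder":
        return f"code for: {prompt}"  # pretend the builder produced code
    # Manager-as-reviewer: approve anything that looks like builder output.
    return "approve" if "code for:" in prompt else "revise"

def manager(goal: str, max_rounds: int = 3) -> dict:
    """Plan subtasks, dispatch to builders, review, retry until approved."""
    # Planning step (stubbed): split the goal into two subtasks.
    subtasks = [f"{goal} / part {i}" for i in (1, 2)]
    results = {}
    for task in subtasks:
        instructions = task
        for _ in range(max_rounds):
            output = call_llm("builder", instructions)           # builder does the work
            verdict = call_llm("manager", f"review: {output}")   # manager reviews it
            if verdict == "approve":
                results[task] = output
                break
            instructions = f"{task} (revised)"                   # update prompt, retry
    return results

results = manager("build billing service")
```

The human never appears inside the loop: they define the goal (`"build billing service"`) and the review criteria, then evaluate the outcome — the layer of abstraction the engineers above are operating at.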
How far up the management ladder will AI climb?
We don't see fully lights-off AI software factories in the near future. But the roles that remain for people will look very different from today's org chart, falling into two key categories.
Senior leadership: Even as AI can analyze data and research best practices, strategic decision-making often sits in difficult gray areas. People will play a role here for some time, both because they can make these decisions effectively and because we’ll want human accountability for them.
A lot of this work will be product-oriented: people can sit face-to-face with customers to hear the nuance in their requests, understand where analytics data might be biased, or bring a vision to decisions when there is no data at all.
There are also strategic decisions in core engineering: for example, whether to architect a system for simplicity and development speed, or to build it for large scale from the start.
Maintenance and support staff: To enable factory-scale operations, agents will need new infrastructure to enable closed-loop development, iteration, and deployment.
Last week I wrote about sandbox environments, one component of the agent-native DevOps stack. There’s a lot more that agent factories will need: git rethought for agent-scale throughput; sophisticated experimentation platforms; and underlying data pipelines, orchestration, and inference infrastructure.
There will be a number of roles for humans to design the plumbing and wiring for their agentic workforce (though agents will undoubtedly help build it).
We are entering unprecedented times. We will no longer build things directly; we will set up and support mostly-autonomous, non-deterministic systems, guiding them to our desired outcomes.
The impacts will be dramatic. Startups will move much more quickly: we see portfolio companies deploying small, highly-autonomous teams of 2-3 people, building multiple products in parallel that each could have staffed 10 people just a year ago. And as Tomasz wrote about last week, incumbents will restructure too: Block led the way with a nearly 50% reduction of their team.
As companies are evolving to this new state, there will be a huge gap in productivity between average and best-in-class teams. The best ones have three things.
On the last point, the next generation of AI infrastructure is clear to us: it’s the tooling required for agent factories to run effectively at massive scale. We will share more of our theories here in the coming months!
If you're thinking about or building for this new world, I’d love to hear from you: at@theoryvc.com.