America's Next Top Modeler: A Hackathon Built for AI Engineering

Oct 21, 2025

The finale of most hackathons is the demo. Demos are fun, but they are ultimately dominated by presentation skills, slick visuals, and subjective "vibes," leaving core technical quality unmeasured. That misses the central challenge of building modern AI systems: can you build an agent that is objectively useful?

We designed America's Next Top Modeler: The Context Engineering Hackathon to answer that question. This is the first hackathon (that we are aware of) that moves beyond demos to focus entirely on AI engineering quality. Participants will compete to design and optimize Context Agents that navigate complex data environments. Your agents will be judged by a set of objective evaluations designed to expose flaws, not by subjective judges. If you are an engineer or builder who wants to test your skills and prove your approach delivers real performance, this is your chance.

Beyond the Framework Hype

Many believe that installing a popular framework is the final step in building an effective AI agent. One could even say several tribes have coalesced around particular tools, each claiming to have found the secret sauce that makes AI “just work.”

This hackathon invites you to put your beliefs and your code to the ultimate test. We invite practitioners to bring their favorite tools to bear, whether that's DSPy, LangGraph, LlamaIndex, TextQL, BAML, or pure ingenuity. Do you believe so strongly in the primacy of your favorite programming language that you think it gives you an unreasonable edge?

The goal is to build Context Agents that can reliably extract, transform, and reason over structured and unstructured data that simulates real enterprise environments. Our suite of evals will reveal which approaches actually solve these problems, so we can all learn what works.

What’s Special About This Hackathon?

  • Pre-defined evals where solving them proves value. Your agents will be judged solely on their performance against rigorous, pre-built evals. There are no presentation scores, no biases, and no subjective opinions to influence the rankings. The outcome is based purely on how effective your agent is at completing the given tasks (see the sketch after this list).
  • The Data Science for AI Imperative. This hackathon highlights the essential role of applying data science principles to debug and optimize modern AI systems. Simply using an agent framework is insufficient; success depends on how smartly you approach feature engineering, tool definition, and context engineering itself. You will learn that achieving reliable agent performance requires iterative data work, not just configuration.
  • A Focus on Engineering Depth. The goal is to build a genuinely high-quality agent. Our evaluations will measure agent performance across multiple stages of complexity, including multi-hop reasoning and combining information from diverse data sources (e.g., databases, PDFs, logs). This process mirrors the challenges of deploying a truly valuable, work-ready agent.
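
To make "judged purely on task completion" concrete, here is a minimal sketch of what an objective eval harness can look like. This is an illustrative assumption, not the hackathon's actual scaffold; names like `Agent`, `EvalCase`, and `exact_match` are hypothetical.

```python
# A minimal, hypothetical eval harness -- illustrative only, not the
# official hackathon scaffold. `Agent`, `EvalCase`, and `exact_match`
# are assumed names used for this sketch.
from dataclasses import dataclass
from typing import Callable, Protocol


class Agent(Protocol):
    """Anything that can answer a question counts as an agent here."""
    def answer(self, question: str) -> str: ...


@dataclass
class EvalCase:
    question: str                      # e.g., a multi-hop question over enterprise data
    expected: str                      # ground-truth answer
    check: Callable[[str, str], bool]  # grader: (predicted, expected) -> pass/fail


def exact_match(predicted: str, expected: str) -> bool:
    """Simplest possible grader: normalized string equality."""
    return predicted.strip().lower() == expected.strip().lower()


def run_evals(agent: Agent, cases: list[EvalCase]) -> float:
    """Score the agent purely on task completion -- no demo, no vibes."""
    passed = sum(case.check(agent.answer(case.question), case.expected) for case in cases)
    return passed / len(cases)
```

Real graders are usually fuzzier (numeric tolerance, partial credit, LLM-as-judge), but the principle holds: the score comes from the checks, not from a presentation.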

Hosted by Theory Ventures and featuring applied AI engineering experts Bryan Bischof and Hamel Husain, this event emphasizes the reliability and quality of AI systems. Most importantly, this hackathon offers you a rare chance to earn bragging rights based on quantifiable performance.

Logistical Details

Join us in San Francisco to put your AI engineering skills to the test.

  • Date and Time: Saturday, November 15, from 11:00 AM to 8:00 PM PST.
  • Location: San Francisco, California.
  • Format: Individuals or teams of two.
  • Provided: OpenAI credits to power your agents, along with food and drinks.
  • Prizes: Special prizes and swag.

Register now to join.
