Python is the language of AI. By all conventional measures, it shouldn't be.
Python is slow. Orders of magnitude slower than C, it loses benchmarks to languages that died decades ago.
Python is unsafe. With no compiler to catch your mistakes, your code’s flaws are exposed when it breaks in use.
Python is unserious. It was a Christmas hobby project, named after a comedy troupe, with its philosophy encoded in an Easter egg.
By every metric that mattered in 1995, Python should be a footnote. And yet, it runs the AI revolution. The vast majority of ML research is published in Python, PyTorch dominates deep learning, and the entire stack from training to inference to agents is Python.
The worst language won, and its flaws are the reason why.
Flow state in Python is simply writing the logic you wish to implement and realizing it just works. Often lauded as the most literate of languages, Python lets you think about your problem instead of syntax gymnastics or interface simulacra. This cognitive transparency lets you work directly with the objects at hand, with typing and linting applied after the fact rather than up front.
Notebooks and REPLs exist in other languages but truly shine in Python: should your model incorporate feature X? A few lines to load the data, a one-shot attempt at the transformation, and a few quick cells reveal whether the feature is credible. Python keeps this feedback loop tight because you can write code on the same terms you're thinking in, operating at the same level of abstraction as the problem you're trying to solve.
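A minimal sketch of that loop, assuming a hypothetical events.csv with signup_date and converted columns; the names are purely illustrative:

```python
import pandas as pd

# Load the data (events.csv, signup_date, converted are hypothetical).
df = pd.read_csv("events.csv", parse_dates=["signup_date"])

# One-shot attempt at the candidate feature: days since signup.
df["days_since_signup"] = (pd.Timestamp.today() - df["signup_date"]).dt.days

# A couple of quick looks are usually enough to judge whether the feature is credible.
print(df["days_since_signup"].describe())
print(df.groupby(pd.qcut(df["days_since_signup"], 4))["converted"].mean())
```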
Contrast this with typed languages: you start and end by thinking about your types. What's the shape of this object? What interface does it implement? The compiler demands answers to questions you don't care about yet. You're making promises about code you haven't written, and that friction accrues. Each type signature is a tax on changing your mind. You can't try the dumb thing to see if it works because the dumb thing doesn't typecheck, and that encumbrance piles up in the very feedback loop you're trying to keep fast.
You chose Python because you'd rather be wrong fast than right slow.
Eric S. Raymond contrasted the “bazaar” model of public, rapid-release, co-developed software with the “cathedral” of private, slow, carefully planned, but refined development. We recognize Python as an exemplar of the bazaar, but does that really explain its success?
Python’s bazaar can be seen via PEPs, PyPI, and the broad base of developers; however, Python’s scientific stack looks more like a cathedral. NumPy, SciPy, Pandas, and Scikit-learn are all sprawling attempts to broaden the applicability of Python, but they’re managed with slow API evolution, smaller contributor groups, and a deep focus on stability of behavior.
The story of who contributed to the scientific libraries is instructive: domain experts who happened to code, not programmers who happened to care about science. They chose Python because it let them think about their problems, not about programming. Shared values produced shared conventions: the array as the universal data structure, broadcasting semantics, the DataFrame as the canonical tabular format. These are design choices that compound. The stack coheres because each layer was built by people who used the layers below.
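A small sketch of what those conventions buy, using nothing beyond NumPy and pandas; the column names are illustrative:

```python
import numpy as np
import pandas as pd

X = np.random.rand(1000, 3)  # 1000 samples, 3 features, as a plain array

# Broadcasting: the (3,) mean and std stretch across all 1000 rows,
# so standardization is one line with no explicit loop.
X_std = (X - X.mean(axis=0)) / X.std(axis=0)

# The same array drops straight into the canonical tabular format.
df = pd.DataFrame(X_std, columns=["height", "weight", "age"])
print(df.describe())
```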
This diverges from the other popular language for AI: TypeScript.
At the 2025 Python Language Summit, Guido van Rossum argued that Python's early imperfection was a feature, not a bug. The code was simple enough that newcomers could understand it. The lack of optimization meant everything was obvious. Early contributors didn't just use Python; they developed a stake in it. These gaps served as invitations, and Python became their baby too.
Deep learning applied this philosophy at a higher level of abstraction: building neural networks is an empirical science where the architectures most frequently emerge from experimentation, not only first principles. How many layers? What activation function? What learning rate? Theory gives you intuitions, but running the thing tells you if it works.
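A minimal sketch of that empirical loop in PyTorch, on toy data and an illustrative search space; the point is the shape of the workflow, not the numbers:

```python
import torch
from torch import nn

X = torch.randn(512, 20)                      # toy inputs
y = (X.sum(dim=1) > 0).float().unsqueeze(1)   # toy binary labels

def build_model(hidden_layers, activation):
    layers, width = [], 20
    for _ in range(hidden_layers):
        layers += [nn.Linear(width, 64), activation()]
        width = 64
    layers.append(nn.Linear(width, 1))
    return nn.Sequential(*layers)

results = {}
for hidden_layers in (1, 3):                  # how many layers?
    for activation in (nn.ReLU, nn.Tanh):     # what activation?
        for lr in (1e-2, 1e-3):               # what learning rate?
            model = build_model(hidden_layers, activation)
            opt = torch.optim.Adam(model.parameters(), lr=lr)
            loss_fn = nn.BCEWithLogitsLoss()
            for _ in range(200):              # a short training run
                opt.zero_grad()
                loss = loss_fn(model(X), y)
                loss.backward()
                opt.step()
            results[(hidden_layers, activation.__name__, lr)] = loss.item()

# Running the thing tells you if it works.
print(min(results, key=results.get))
```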
By 2012, when AlexNet won ImageNet, Python already owned scientific computing. The researchers who built TensorFlow and PyTorch were already Python users. The numerical stack already existed. Transformers, GPT, the entire LLM revolution, arrived into a world where Python was the default. Python spent two decades becoming the language that fits how researchers think. When the revolution came, Python was ready because it was born ready. The same qualities that attracted scientists in 1995 attracted AI researchers in 2012 and agent developers today. We shouldn't see this recurrence as coincidence; we should read it as the inevitable result of intentional design.
Agent development extends the same logic. If deep learning is empirical, agent development is doubly so. You're experimenting with prompts, tool choices, and reasoning chains, work that is reminiscent of hyperparameter or architecture searches. This is why many people have observed that building agents is well suited to computational notebooks: rapidly updating a tiny fragment of the code, seeing what it outputs, and trying again.
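A minimal sketch of that loop in plain Python; call_llm, the CALL convention, and the single toy tool are hypothetical stand-ins, not any particular SDK:

```python
def search_docs(query: str) -> str:
    """Toy tool: pretend to search internal docs."""
    return f"Top result for {query!r}: ..."

TOOLS = {"search_docs": search_docs}

def run_agent(task, call_llm, max_steps=5):
    history = [f"Task: {task}"]
    for _ in range(max_steps):
        reply = call_llm("\n".join(history))   # the prompt is just the transcript
        if reply.startswith("CALL "):          # e.g. "CALL search_docs pandas groupby"
            _, name, arg = reply.split(" ", 2)
            history.append(f"Observation: {TOOLS[name](arg)}")
        else:
            history.append(reply)
            break
    return history                             # inspect the whole trace in the next cell
```

In a notebook, you tweak the prompt, swap a tool, or change the loop itself, re-run the cell, and read the returned trace directly.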
Notebooks are perhaps the ultimate demonstration of Python's beautiful flaws. While REPLs and notebooks exist for other languages, they have truly gone mainstream in Python – and more specifically at the intersection of Python and science.
In a world where code was already becoming more literal, the wave of AI development has pushed developers into a hyper-literal programming environment: natural language. The logic layer and the business layer look more alike than ever, and there's opportunity in minimizing that delta.
Humans and agents need to write, run, debug, and observe code. Both reason via natural language (for now), but can drop into syntax if needed. If you’re building systems meant to be used by both, it’s unsurprising that Python would be the language of choice — in one medium, you can define tools, run them, and interrogate what happened.
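A small sketch of that single medium, using only the standard library; the tool and the spec format are illustrative, not tied to any agent framework:

```python
import inspect

def get_weather(city: str, unit: str = "celsius") -> str:
    """Return a short weather summary for a city."""
    return f"Weather in {city}: 21 degrees {unit}, clear."

# Humans run the tool directly...
print(get_weather("Lisbon"))

# ...and an agent-facing description is derived from the same definition.
sig = inspect.signature(get_weather)
tool_spec = {
    "name": get_weather.__name__,
    "description": inspect.getdoc(get_weather),
    "parameters": {name: p.annotation.__name__ for name, p in sig.parameters.items()},
}
print(tool_spec)
```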
This is why we’re hosting a conference on Python for AI. Plenty of AI events are happening, but almost none speak to the people building AI products – Python developers trying to get AI systems to actually work in production. Python is an incredible language for both real systems and the rapid experimentation that AI requires.
Join us for a day at the bazaar.