Theofanis Karaletsos
My work focuses on epistemologically grounded AI systems that can reason scientifically to understand and control the physical world. To that end, I conduct research, define research programs, build organizations, and collaborate.
Career. I am a co-founder of Achira.ai, a company at the interface of statistical physics, AI, and biomolecular simulation for drug discovery. Most recently, I served as Head of AI for Science at the Chan Zuckerberg Initiative, where I built and led the AI for Science organization around virtual cell modeling. Previously: VP of AI at Insitro, Staff Research Scientist at Meta, founding member of Uber AI Labs, and researcher at Geometric Intelligence. View more →
Research. My research is driven by a central question: what does it mean for an AI system to have a model of the world that is expressive, calibrated, and useful for decision-making? I work across the full AI stack, from the mathematical foundations of probabilistic inference and deep generative models to scalable algorithms and programming abstractions (I co-created Pyro), to their deployment as world models for scientific discovery in biology and physics. I am drawn to problems where methodology and application genuinely constrain each other. The deeper aim is AI capable of scientific reasoning, in service of human discovery. View publications →
Research Interests
Probabilistic Reasoning & Generative AI
The architecture of knowledge: what it means for AI to know something. Probabilistic reasoning is the formal language for representing knowledge and uncertainty in AI, with approximate Bayesian inference providing a powerful framework for learning and decision-making under uncertainty. This domain spans inference, uncertainty quantification, causal modeling, and the computational methods that make principled reasoning practical at scale. A core epistemological question is how a system represents what it knows and does not know, and how it reasons about interventions and counterfactuals. Compute and data efficiency are first-class properties of intelligent systems: how much data and computation a system needs in order to learn and reason is a direct measure of the quality of its design.
Deep Learning & Representations
Architectures that capture and organize knowledge about the world. What kinds of architectures produce representations that genuinely reflect the underlying structure of complex data? The empirical foundations span transformers and domain-specific models across language, vision, sequences, and scientific data, with inductive bias matched to the geometry and compositional structure of each domain. The central challenge is to learn representations whose latent structure is identifiable, whose factors are disentangled, and whose abstractions transfer across regimes, compose across scales, and support reasoning in new conditions. A frontier of modern AI is building models whose internal representations are rich, scalable, and capable of generalizing beyond the setting in which they were trained.
Scientific World Models
From molecules to cells: AI that understands life. Computational world models built at two scales. At the molecular scale, this takes the form of virtual chemistry: molecular world models grounded in statistical physics and biomolecular simulation that encode molecular interactions into learnable representations. At the cellular scale, virtual cell models capture behavior across single-cell biology, perturbation response, gene expression, and organismal variation. Both are generative systems that simulate virtual experiments, reason about interventions, and produce testable hypotheses.
Scientific Reasoning and Adaptive Intelligence
Scientific reasoning is among the highest expressions of intelligence. It requires more than a world model: a system must identify what it does not know, seek evidence, update its beliefs, and turn understanding into action. This demands adaptive intelligence that can both model and interrogate the world in a continuous loop, learning from every interaction with environments, simulations, and users, and using its world model to guide exploration and decision-making. The goal is AI that not only models the world but improves itself through experience, becoming more capable over time.
Selected Publications View all →
Blog Theo-splaining →
Essays on probabilistic reasoning, world models, and AI for science. Coming soon.