Nature Machine Intelligence, Published online: 23 March 2026; doi:10.1038/s42256-026-01177-0
We introduce a framework for analysing interpretability in deep learning that draws on a formal notion of model semantics from the philosophy of science. We argue that interpretability is only one aspect of a model’s semantics, and we illustrate the framework with examples from biomedicine.