Maximillian K. Machado
CS PhD student at Duke University · Interpretable Machine Learning Lab · Advised by Dr. Cynthia Rudin
Why care about interpretable models when “black-box” networks usually get the answer right?
That question rests on a common misconception, and confronting it is the heart of my research. In the Interpretable Machine Learning (IML) lab, I design interpretable computer vision models and apply them to high-stakes domains such as breast cancer risk prediction. A core part of my work is distilling high-dimensional data into insights that stakeholders can understand and act on.
So why does interpretability matter? It tells you three things black-box models can’t: whether your model cheated by exploiting confounders, which genuine patterns it actually found (often clinically meaningful ones), and where humans can directly intervene to shape model behavior.
My goal: build models that don’t just predict accurately but also uncover the mechanisms behind the data, earning the trust needed to deploy them where every prediction shapes a patient’s outcome.