Maximillian K. Machado

CS PhD student at Duke University · Interpretable Machine Learning Lab · Advised by Dr. Cynthia Rudin



Why care about interpretable models when “black-box” networks usually get the answer right?

This is a common misconception and the heart of my research. In the Interpretable Machine Learning (IML) lab, I design interpretable computer vision models and apply them to high-stakes domains like breast cancer risk prediction. A core part of my research is distilling high-dimensional data into insights stakeholders can understand and act on.

So why does interpretability matter? It reveals three things black-box models can’t: whether your model cheated on confounders, what genuine patterns it actually found (often clinically meaningful), and where humans can directly intervene on model behavior.

My goal: build models that don’t just predict accurately, but uncover the mechanisms behind the data — earning the trust to deploy where every prediction shapes a patient outcome.

selected publications

  1. Cosine Similarity is Almost All You Need (for Prototypical-Part Models)
    Luke Moffett, Frank Willard, Maximillian Machado, and 8 more authors
    In Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, 2026