MITRP Presents Three Posters at WisconsINFORMATICS 2026
On February 3rd, 2026, the Molecular Imaging Technology Research Program (MITRP) participated in the annual WisconsINFORMATICS conference at the Health Sciences Learning Center (HSLC). Informatics is central to our mission of extracting meaningful information from medical imaging data, and this year we contributed three posters highlighting how deep learning can improve clinical workflows and model evaluation.
One of the major challenges in modern medical AI is effectively comparing the growing number of available models. James Milgram, Jake Yun, Xue Li, and Alan B. McMillan presented a poster titled “A Modular Framework for Standardized Benchmarking of Medical Imaging Foundation Models”. As Foundation Models (FMs) replace models trained from scratch as the new paradigm for AI workflows, comparing these architectures across different tasks has become increasingly difficult. The team introduced a model-agnostic framework for benchmarking FMs and found that models pretrained on diverse datasets generally produced more generalizable embeddings than domain-specific models. These findings help guide the selection of robust tools for future medical imaging tasks.
The group also explored how to streamline radiation therapy and PET imaging workflows through better image synthesis. Tracy He and Alan B. McMillan presented “High-Fidelity MR-to-CT Synthesis in Brain Imaging via an Adapted nnU-Net Framework”. The researchers highlighted that precise synthetic CTs (sCTs) are vital not only for MRI-only radiation therapy, which reduces patient radiation exposure, but also for PET attenuation correction, streamlining clinical workflows for PET/MR imaging. By adapting the nnU-Net framework, typically used for segmentation, to perform 3D intensity regression, the model successfully learned the non-linear mapping between MRI intensities and CT Hounsfield units, even when trained on small datasets. This work validated a practical, automated alternative to traditional, labor-intensive sCT generation methods.
In the third poster, the lab examined the trade-offs between model complexity and computational efficiency. Xue Li, Dvij Sharma, Carl Kashuk, Orhan Unal, Richard Bruce, John W. Garrett, and Alan B. McMillan shared their work, “Comparative Efficacy of Foundation Models versus End-to-End Deep Learning for Tumor Classification”. This study compared naïve 2D Foundation Models using lightweight adapters against traditional 3D end-to-end models for tumor classification. The results were promising for resource-constrained environments: the FM-based approach was approximately 80 times faster on CPUs than end-to-end training on GPUs. This demonstrated that aggregating 2D representations can be a scalable, resource-efficient alternative to costly 3D modeling without sacrificing accuracy.
The conference provided an excellent venue for the UW-Madison community to discuss data-driven innovations. Following the morning lightning talks, our team engaged with attendees during the poster session in the HSLC atrium, discussing how these informatics approaches address real-world challenges in medical imaging.