13:30 - 13:35 | | Welcome + opening remarks |
13:35 - 14:00 | | Two SASHIMI orals (10 + 2 min) |
| | Self-Supervised Super-Resolution for Anisotropic MR Images with and without Slice Gap (poster,code) |
| | Remedios, Han, Zuo, Carass, Pham, Prince, Dewey |
|
| | Unsupervised Liver Tumor Segmentation with Pseudo Anomaly Synthesis (poster) |
| | Zhang, Deng, Li |
14:00 - 14:15 | | SynthRAD challenge overview |
14:15 - 14:40 | | Three SynthRAD orals (1x 10 + 2 min, 2x 5 + 1 min)
| | A Hybrid Network with Multi-scale Structure Extraction and Preservation for MR-to-CT Synthesis in SynthRAD2023 (10+2 min) |
| | Zeli Chen, Kaiyi Zheng, Chuanpu Li, and Yiwen Zhang |
|
| | Synthesis of CT images from MRI images based on nnU-Net (5+1 min) |
| | Haowen Pang, Chuyang Ye |
|
| | A Self-Pretraining Paradigm For CBCT-CT Translation (5+1 min) |
| | Runqi Wang, Zheng Zhang, Ruizhi Hou, Lei Xiang, and Tao Song |
|
14:40 - 15:20 | | 1-min poster highlights (9 SASHIMI + 20 SynthRAD) |
| | Transformers for CT Reconstruction From Monoplanar and Biplanar Radiographs (poster,code) |
| | Khader, Müller-Franzes, Han, Nebelung, Kuhl, Stegmaier, Truhn |
|
| | Physics-Aware Motion Simulation for T2*-Weighted Brain MRI (code) |
| | Eichhorn, Hammernik, Spieker, Epp, Rueckert, Preibisch, Schnabel |
|
| | TAI-GAN: Temporally and Anatomically Informed GAN for early-to-late frame conversion in dynamic cardiac PET motion correction (code) |
| | Guo, Shi, Chen, Zhou, Liu, Xie, Liu, Palyo, Miller, Sinusas, Spottiswoode, Liu, Dvornek |
|
| | Improving style transfer in dynamic contrast enhanced MRI using a spatio-temporal approach (poster) |
| | Tattersall, Goatman, Kershaw, Semple, Dahdouh |
|
| | Synthetic Singleplex-Image Generation in Multiplex-Brightfield Immunohistochemistry Digital Pathology using Deep Generative Models |
| | Lorsakul, Martin, Landowski, Walker, Flores, Clements, Olson, Ferreri |
|
| | DIFF·3: A latent diffusion model for the generation of synthetic 3D echocardiographic images and corresponding labels (poster,code) |
| | Ferdian, Zhao, Maso Talou, Quill, Legget, Doughty, Nash, Young |
|
| | Learned Local Attention Maps for Synthesising Vessel Segmentations from T2 MRI (poster) |
| | Deo, Bonazzola, Dou, Xia, Wei, Ravikumar, Frangi, Lassila |
|
| | How Good Are Synthetic Medical Images? An Empirical Study with Lung Ultrasound (code) |
| | Yu, Kulhare, Mehanian, Delahunt, Shea, Laverriere, Shah, Horning |
|
| | Super-resolution Segmentation network for inner-ear tissue segmentation |
| | Liu, Fan, Lou, Noble |
|
| | Image translation using ShuffleUNet (poster)
| | Juhyung (Tony) Ha, Jong Sung Park
|
| | Swin UNETR Based MRI-to-CT and CBCT-CT Synthesis (poster)
| | Fuxin Fan, Jingna Qiu, Yixing Huang
|
| | Paired MR-to-sCT Translation using Conditional GANs - an Application to MR-guided Radiotherapy (poster)
| | Alexandra Alain-Beaudoin, Laurence Savard, Silvain Bériault
|
| | Synthetic CT generation from CBCT images: Short Paper for SynthRAD 2023 (poster)
| | Pengxin Yu
|
| | Generate CT from CBCT using DDIM
| | Gengwan Li, Xueru Zhang
|
| | MR to CT Synthesis using U-net
| | Hongbin Guo, Zhanyao Huang
|
| | Team KoalAI: Locally-enhanced 3D Pix2Pix GAN for Synthetic CT Generation (poster)
| | Bowen Xin, Aaron Nicolson, Hilda Chourak, Gregg Belous, Jason Dowling
|
| | CT Synthesis with Modality-, Anatomy-, and Site-Specific Inference (poster)
| | Yubo Fan, Han Liu, Ipek Oguz, Benoit M. Dawant
|
| | SynthDiffuson at SynthRAD 2023 Task 1: Synthesizing Computed Tomography for Radiotherapy (poster)
| | Lujia Zhong, Zhiwei Deng, Shuo Huang, Wenhao Chi, Jianwei Zhang, Yonggang Shi
|
| | Multi-Planar Convolutional Neural Networks for MRI and CBCT to CT Translation (poster)
| | Gustav Müller-Franzes, Firas Khader, Daniel Truhn
|
| | Conditional GAN is all you need for MR2CT (poster)
| | Xia Li, Ye Zhang
|
| | A Simple Two-stage Network for MR-CT Translation
| | Zhihao Zhang, Long Wang, Tao Song, Lei Xiang
|
| | Synthetic CT Generation from CBCT using MSG-GAN
| | Lu Bai, Chenyu, Chenqi, Shaobin Wang, Yi Du
|
| | SynthRAD 2023: Synthetic CT from MRI
| | Derk Mus, Bram Kooiman, Rick Bergmans, Jara Linders
|
| | Guiding Unsupervised MRI-to-CT and CBCT-to-CT Synthesis using Content and Style Representation by an Enhanced Perceptual Synthesis (CREPs) Loss (poster)
| | Cedric Hemon, Valentin Boussot, Blanche Texier
|
| | A Multi-channel CycleGAN for CBCT to sCT Generation (poster)
| | Chelsea A. H. Sargeant, Edward G. A. Henderson, Dónal M. McSweeney, Aaron G. Rankin, Denis Page
|
| | Synthesizing 3D Computed Tomography from MRI or CBCT using 2.5D Deep Neural Networks
| | Satoshi Kondo, Satoshi Kasai, Kousuke Hirasawa
|
| | MR to CT Translation using Generative Adversarial Networks (poster)
| | Reza Karimzadeh, Bulat Ibragimov
|
| | SynthRAD 2023 - MRI-to-sCT Generation to Facilitate MR-only Radiotherapy
| | Thomas Helfer, Walter Hugo Lopez Pinaya, Francisco Pereira, Adam G. Thomas, Jessica Dafflon
|
| | MRI-to-sCT and CBCT-to-sCT Generation Methods in SynthRAD2023
| | Zijie Chen, Enpei Wang
|
15:20 - 16:15 | | Coffee + joint poster session |
16:15 - 17:00 | | Keynote |
17:00 - 17:25 | | Two SASHIMI orals (10 + 2 min) |
| | Multi-Phase Liver-Specific DCE-MRI Translation via a Registration-Guided GAN (poster,code) |
| | Liu, Li, Shi, Zhou, Gao, Shi, Zhang, Zhuang |
|
| | Unsupervised heteromodal physics-informed representation of MRI data: tackling data harmonisation, imputation and domain shift (poster) |
| | Borges, Fernandez, Tudosiu, Nachev, Ourselin, Cardoso |
17:25 - 17:30 | | Sponsor message |
17:30 - 17:35 | | Award + closing |
For all papers, please upload the poster-highlight video and poster PDF before October 2.
Format: PDF, maximum file size 3 MB. We recommend A0 size at 72 DPI (3370 x 2384 pixels) with a font size of at least 32 pt.
Physical posters must be in portrait format. The maximum poster size for MICCAI 2023 is A0 (841 x 1189 mm, or 33.1 x 46.8 in, width x height), portrait. Please adhere to this format.
Oral talks are 10 + 2 min, including Q&A. If you plan to attend SASHIMI virtually, please upload both your slides and a 10-minute recorded video (.mp4 format) before October 2. If you plan to attend SASHIMI in person, please email us the presenter's name and the paper ID/title. We still recommend uploading your slides and video to us as a backup.