JUNE 18, Tuesday morning
Roy Eagleson, PhD
University of Western Ontario (CA)
Georges Hattab, PhD

National Center for Tumor Diseases (NCT) Dresden (DE)

TUT1: Tutorial AR/VR: Perceptual Capacities and Constraints in AR/VR for the visualization of 3D biomedical image data

The use of AR and VR modalities for the visualization of 3D biomedical image data is feasible thanks to a growing number of hardware and software solutions. As the technical challenges and development hurdles subside, it is increasingly important to consider the special capacities and constraints of the human perceptual, motor, and cognitive systems. From a systems design perspective, empirical research into human-computer interface performance should inform the development process. We will present essential design notions including, but not limited to, task-oriented design, lateral and vertical transformations, and user interface design principles. Generally, these can take the form of 'development guidelines' or, alternatively, of anti-patterns which alert the designer to principles that should not be violated. For this tutorial, we plan a practical, hands-on session: given different visualization tasks, participants will work in small groups to create appropriate visualizations. We anticipate that this will lead to an exploration of the design space, in which we can indicate what might benefit from adjustment or be better suited to the task at hand.

This tutorial will be a forum for scientific and engineering developments in this area, as well as for two talks that give an overview of these different aspects of the field.

JUNE 18, Tuesday afternoon
Tamas Ungi, PhD
Queen's University (CA)
Sonia M. Pujol, PhD
Brigham and Women's Hospital, Harvard Medical School (US)
Javier Pascau, PhD
Universidad Carlos III de Madrid (ES)
Gabor Fichtinger, PhD
Queen's University (CA)
TUT2: Tutorial SlicerIGT: Deep learning and computer vision for real-time procedure annotation

SlicerIGT is an established open-source platform for navigation of interventional medical procedures. It has been used to implement experimental and clinical research systems in many specialties from ultrasound-guided injections to brain surgery. The platform supports real time communication with most tracking, imaging, and sensor devices. It also supports communication with major commercial navigation systems to experiment with additional features for existing procedures. A series of SlicerIGT tutorials have been presented in the past years, focusing on a different topic each year. This year, the topic is computer vision and deep learning for real time annotation of surgical events. This new feature of SlicerIGT enables detection of tools and gestures in video streams. Video-based data collection can be applied on tools that are not traditionally tracked by optical or electromagnetic sensors. This new feature of SlicerIGT significantly expands the potential applications that can be built on the platform, both in intervention navigation and in simulation-based training.

The tutorial consists of two sessions. First, invited speakers give an overview of open-source resources and talk about their vision for the future applications of these research tools. In the second session, the audience will build a working surgical simulation software on their laptops, using devices provided by the presenters. Participants will gain hands-on experience in the basics of intervention navigation technology, as well as integration of advanced real-time image processing algorithms.

    -Preliminary program- 

 13:30  Welcome and introduction

 13:45  Short lectures

 13:45  Alexandra Golby, Harvard Medical School, USA

 14:00  Danail Stoyanov, University College London, UK


 14:45  Tamas Ungi, Queen's University, Canada

 15:00  Preparations for the hands-on session

 15:30  Coffee break

 16:00  Hands-on tutorial

 17:30  Adjourn

JUNE 18, Tuesday morning

Daniel Lückehe, PhD

Leibniz Universität Hannover (DE)

Gabriele von Voigt, PhD

Leibniz Universität Hannover (DE)
TUT3: Tutorial DL-1: Applied Deep Learning for Medical Scientists working with Image Data

In this tutorial, we would like to show medical scientists how to use methods from the field of deep learning as tools to support their decision processes. The tutorial is designed for about 3-4 hours. As it is impossible to cover all aspects in this amount of time, we will focus on the following points:

 –  Introduction to the field of Deep Learning (DL) and neural network implementations with TensorFlow

 –  Hands-on examples for three types of Deep Neural Networks (DNNs) which are especially relevant for the medical field:

     1. To classify images
         - Basic: Convolutional Neural Networks (CNNs)
         - Advanced: Deep Residual Networks (ResNets)

     2. To segment images
         - U-Net from Convolutional Networks for Biomedical Image Segmentation

 –  Additionally, an outlook on further methods and network architectures, such as Recurrent Neural Networks for classifying time series, will be given
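As a glimpse of what the hands-on CNN examples cover, here is a minimal sketch (in plain NumPy, independent of the TensorFlow code used in the tutorial) of the single operation a convolutional layer performs: a sliding-window filter followed by a non-linearity. The edge-detecting kernel and toy image are illustrative only.

```python
import numpy as np

def conv2d_relu(image, kernel):
    """Valid 2D cross-correlation followed by ReLU, as in one CNN layer."""
    H, W = image.shape
    kH, kW = kernel.shape
    out = np.zeros((H - kH + 1, W - kW + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + kH, j:j + kW] * kernel)
    return np.maximum(out, 0.0)  # ReLU non-linearity

# Toy example: a 3x3 edge filter responding to dark-to-bright transitions
image = np.zeros((5, 5))
image[:, 2:] = 1.0                       # right half of the image is bright
kernel = np.array([[-1., 0., 1.],
                   [-1., 0., 1.],
                   [-1., 0., 1.]])
features = conv2d_relu(image, kernel)
print(features.shape)  # (3, 3)
```

A real network stacks many such layers (with learned kernels) and ends in a classifier head; this sketch only shows the core computation.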

JUNE 18, Tuesday afternoon

John S.H. Baxter, PhD

University of Rennes (FR, CA)
TUT4: Tutorial DL-2: Advanced Deep Learning for Medical Imaging Data

-Preliminary program- 

First Session (30 min) - Output Activations and Loss Functions

  • Refresher - output activations, loss functions, optimisation details
  • Interpreting DL through probability theory - maximum likelihood estimation, Kullback-Leibler divergence, etc.

  • Implement L2 losses incorporating uncertainty (Gaussian model)
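A minimal NumPy sketch of one common way to fold uncertainty into an L2 loss under a Gaussian model: the network predicts a log-variance alongside each output, which down-weights the squared error on uncertain samples. The exact formulation used in the session may differ.

```python
import numpy as np

def gaussian_nll(y_true, y_pred, log_var):
    """Per-sample negative log-likelihood of y_true under N(y_pred, exp(log_var)).

    With log_var fixed at 0 this reduces to half the ordinary L2 loss;
    letting the network predict log_var lets it attenuate uncertain samples.
    (Constant terms such as 0.5*log(2*pi) are dropped.)
    """
    return 0.5 * np.exp(-log_var) * (y_true - y_pred) ** 2 + 0.5 * log_var

y_true = np.array([1.0, 2.0])
y_pred = np.array([1.5, 2.0])
# Zero log-variance: plain (halved) squared error
loss_certain = gaussian_nll(y_true, y_pred, log_var=np.zeros(2)).mean()
print(loss_certain)  # 0.0625
```

The second term (0.5 * log_var) penalises the network for claiming high uncertainty everywhere, so the two terms must be traded off during training.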


Second Session (45 min) - Probabilistic Graphical Models in TF

  • Motivation - Separation of learning and inference

  • Solving probabilistic graphical models in DL - maximum probability estimation, marginal probability estimation, & mean field approximations

  • Dense conditional random fields with Gaussian priors

      – Marginal probability solution algorithm

      – Implementation using Tensor operations

  • Discussion on implementation details of PGMs in deep learning


Third Session (45 min) - Custom Operations in TensorFlow

  • Motivation - Memory consumption and gradient calculation of Tensor operations used in iterative algorithms

  • Implementing a custom Tensor operation in C++

      – Registering the C++ operation in TensorFlow, defining operation inputs and outputs

      – Additional considerations for using CUDA

  • Implementing a custom gradient operation in C++
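The C++ material itself cannot be condensed here, but the contract a custom gradient operation must satisfy can: the hand-derived gradient has to agree with finite differences of the forward op, or backpropagation will be silently corrupted. A NumPy sketch of this gradient check, using an illustrative swish activation (not an op from the tutorial):

```python
import numpy as np

def swish(x):
    """Forward op: x * sigmoid(x)."""
    return x / (1.0 + np.exp(-x))

def swish_grad(x):
    """Hand-derived gradient, as one would register alongside a custom op."""
    s = 1.0 / (1.0 + np.exp(-x))
    return s + x * s * (1.0 - s)

# Gradient check: analytic gradient vs central finite differences
x = np.linspace(-3.0, 3.0, 7)
eps = 1e-6
numeric = (swish(x + eps) - swish(x - eps)) / (2 * eps)
max_err = np.abs(numeric - swish_grad(x)).max()
print(max_err)  # close to zero
```

The same check applies to a registered C++/CUDA gradient kernel; only the machinery for calling it changes.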


Fourth Session (45 min) - Adversarial Losses

  • Motivation - Learning loss functions, matching unknown distributions

  • Implement MNIST variational autoencoder with mixed losses

  • Discuss pitfalls of adversarial losses in terms of probability theory and optimisation
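For orientation before the exercise, a NumPy sketch of the KL regularisation term that a variational autoencoder mixes with its reconstruction (and, in adversarial variants, discriminator) losses, for a diagonal Gaussian posterior against a standard normal prior:

```python
import numpy as np

def kl_to_standard_normal(mu, log_var):
    """KL divergence KL(N(mu, exp(log_var)) || N(0, I)), summed over latent dims.

    Closed form for diagonal Gaussians:
    0.5 * sum(exp(log_var) + mu^2 - 1 - log_var)
    """
    return 0.5 * np.sum(np.exp(log_var) + mu ** 2 - 1.0 - log_var)

# The KL term vanishes exactly when the encoder matches the prior...
kl_zero = kl_to_standard_normal(np.zeros(4), np.zeros(4))
print(kl_zero)   # 0.0
# ...and grows as the posterior drifts away from it
kl_shift = kl_to_standard_normal(np.array([1.0]), np.array([0.0]))
print(kl_shift)  # 0.5
```

In the MNIST exercise this term is weighted against the pixel-wise reconstruction loss; how that weight interacts with an added adversarial loss is one of the pitfalls discussed.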

JUNE 21, Friday afternoon
Nicola Rieke, PhD
TUT5: Hands-on tutorial on advanced Deep Learning for Medical Imaging

Part 1: Image Classification using the MedNIST dataset


Get a hands-on practical introduction to deep learning for radiology and medical imaging. You'll learn how to:


· Collect, format, and standardize medical image data

· Architect and train a convolutional neural network (CNN) on a dataset

· Use the trained model to classify new medical images


Upon completion, you’ll be able to apply CNNs to classify images in a medical imaging dataset.


Prerequisites: Basic experience with Python


Part 2: Data Augmentation and Segmentation with Generative Networks for Medical Imaging


A generative adversarial network (GAN) is a pair of deep neural networks: a generator that creates new examples based on the training data provided, and a discriminator that attempts to distinguish between genuine and simulated data. As both networks improve together, the examples they create become increasingly realistic. This technology is promising for healthcare because it can augment smaller datasets for training traditional networks. You'll learn to:


· Generate synthetic brain MRIs

· Apply GANs for segmentation

· Use GANs for data augmentation to improve accuracy


Upon completion, you'll be able to apply GANs to medical imaging use cases.
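The generator/discriminator interplay described above can be sketched with the standard binary cross-entropy objectives (a NumPy illustration with made-up discriminator outputs, not the course code):

```python
import numpy as np

def bce(probs, labels):
    """Binary cross-entropy, the usual GAN training objective."""
    return -np.mean(labels * np.log(probs) + (1 - labels) * np.log(1 - probs))

# Hypothetical discriminator outputs P(real) on a batch of images
d_real = np.array([0.9, 0.8])   # on genuine data
d_fake = np.array([0.2, 0.3])   # on generator samples

# Discriminator: push d_real toward 1 and d_fake toward 0
d_loss = bce(d_real, np.ones(2)) + bce(d_fake, np.zeros(2))
# Generator (non-saturating form): push d_fake toward 1
g_loss = bce(d_fake, np.ones(2))
print(d_loss, g_loss)
```

As the generator improves, d_fake rises, g_loss falls, and the discriminator's task gets harder; it is this tug-of-war that makes the synthetic images increasingly realistic.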


Prerequisites: Experience with CNNs


