CARS 2019 VIDEOCAST
Roy Eagleson, PhD
Georges Hattab, PhD
The use of AR and VR modalities for visualization of 3D biomedical image data is now feasible thanks to a growing number of hardware and software solutions. As the technical challenges and development hurdles subside, it becomes increasingly important to consider the particular capacities and constraints of the human perceptual, motor, and cognitive systems. From a system design perspective, empirical research on human-computer interface performance should inform the development process. We will present essential design notions including, but not limited to, task-oriented design, lateral and vertical transformations, and user interface design principles. The tutorial concludes with a hands-on session: given different visualization tasks, participants will work in small groups to create appropriate visualizations.
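One classic example of the kind of empirical human-computer interface performance model mentioned above is Fitts' law, which predicts pointing time from target distance and width. The sketch below uses the Shannon formulation; the coefficient values and targets are illustrative only, since real coefficients must be fitted from user-study data.

```python
import math

def fitts_index_of_difficulty(distance, width):
    """Shannon formulation of Fitts' law: ID = log2(D/W + 1), in bits."""
    return math.log2(distance / width + 1)

def predicted_movement_time(distance, width, a=0.1, b=0.15):
    """Predicted pointing time MT = a + b * ID.
    The intercept a (s) and slope b (s/bit) here are made-up
    illustration values, not fitted coefficients."""
    return a + b * fitts_index_of_difficulty(distance, width)

# A distant, small target is harder (higher ID) than a near, large one.
hard = fitts_index_of_difficulty(distance=300, width=10)  # log2(31) bits
easy = fitts_index_of_difficulty(distance=100, width=50)  # log2(3) bits
```

Models of this kind let a designer compare candidate AR/VR interaction layouts quantitatively before running a full user study.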
Sonia M. Pujol, PhD
Javier Pascau, PhD
Gabor Fichtinger, PhD
SlicerIGT is an established open-source platform for navigation of interventional medical procedures. It has been used to implement experimental and clinical research systems in many specialties, from ultrasound-guided injections to brain surgery. The platform supports real-time communication with most tracking, imaging, and sensor devices. It also supports communication with major commercial navigation systems, allowing researchers to experiment with additional features for existing procedures. The tutorial will focus on computer vision and deep learning for real-time annotation of surgical events. It consists of two sessions. In the first, invited speakers give an overview of open-source resources and present their vision for future applications of these research tools. In the second, the audience will build working surgical simulation software on their laptops, using devices provided by the presenters. Participants will gain hands-on experience in the basics of intervention navigation technology, as well as in the integration of advanced real-time image processing algorithms.
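SlicerIGT's real-time device communication is built on the OpenIGTLink protocol. As a rough sketch of what travels over the wire, the snippet below packs an OpenIGTLink version-1 message header (2-byte version, 12-byte type, 20-byte device name, 8-byte timestamp, 8-byte body size, 8-byte CRC; 58 bytes total, big-endian). It illustrates the framing only; the device name is a made-up example and CRC computation is omitted.

```python
import struct

def pack_igtl_header(msg_type, device_name, timestamp, body_size, crc=0):
    """Pack an OpenIGTLink v1 message header (58 bytes, network byte order).
    Layout: version(2) | type(12) | device name(20) | timestamp(8)
            | body size(8) | CRC64(8).  CRC is left at 0 in this sketch."""
    return struct.pack(
        ">H12s20sQQQ",
        1,                              # protocol version 1
        msg_type.encode("ascii"),       # e.g. "TRANSFORM", "IMAGE"
        device_name.encode("ascii"),    # hypothetical device name
        timestamp,
        body_size,
        crc,
    )

header = pack_igtl_header("TRANSFORM", "ProbeToTracker", 0, 48)
# len(header) == 58
```

In practice, participants would use an existing OpenIGTLink client library rather than packing headers by hand; the sketch only shows why arbitrary trackers and imaging devices can interoperate through one simple framing.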
Daniel Lückehe, PhD
Gabriele von Voigt
In this tutorial, we show medical scientists how to use methods from the field of deep learning (DL) as tools in their decision processes. It is designed for medical experts with little or no experience in DL; no prior knowledge is required. We begin with an introduction to DL and to neural network implementation with TensorFlow. Then there will be hands-on examples for three types of Deep Neural Networks (DNNs) that are especially relevant to the medical field: Convolutional Neural Networks, Deep Residual Networks, and the U-Net for biomedical image segmentation.
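To give a feel for what a convolutional layer in such networks computes, here is a minimal NumPy sketch of a single 2-D convolution (one filter, no padding, stride 1) followed by a ReLU activation. TensorFlow performs the same operation, just vectorized, batched, and differentiable; the input image and filter here are made-up toy values.

```python
import numpy as np

def conv2d_valid(image, kernel):
    """Single-channel 2-D convolution (strictly, cross-correlation, as in
    DL frameworks) with no padding and stride 1."""
    kh, kw = kernel.shape
    oh = image.shape[0] - kh + 1
    ow = image.shape[1] - kw + 1
    out = np.empty((oh, ow))
    for i in range(oh):
        for j in range(ow):
            # Slide the filter over the image and take a weighted sum.
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

def relu(x):
    """Element-wise rectified linear activation."""
    return np.maximum(x, 0.0)

image = np.arange(25, dtype=float).reshape(5, 5)
gradient_filter = np.array([[-1.0, 1.0]])   # responds to left-to-right increase
feature_map = relu(conv2d_valid(image, gradient_filter))
# feature_map.shape == (5, 4)
```

A convolutional network learns the filter values instead of hand-designing them, and stacks many such layers; the residual connections and U-Net skip connections covered later are structured ways of composing these layers.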
John S.H. Baxter
The purpose of the Advanced Deep Learning for Medical Imaging Data tutorial is to expose participants to some of the richness of deep learning methods, focused on developing a more solid theoretical background as to how they operate. This tutorial is designed to complement the “Applied Deep Learning for Medical Scientists Working with Image Data” tutorial and comprises about 3 hours of instruction with an additional hour for questions and discussion. It will focus on the following points:
– Introduction to probability distributions, with a view towards output activation and corresponding loss functions.
– Introduction to probabilistic graphical models, their solution algorithms, and their integration into deep learning.
  – Implementing conditional random fields with Gaussian priors for image segmentation.
– Extending TensorFlow with custom operators and gradients.
– Adversarial losses and matching unknown distributions.
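As a concrete instance of the first point, an output activation paired with its matching loss: a categorical output distribution is typically produced with a softmax and trained with cross-entropy, which is the negative log-likelihood under that distribution. A minimal NumPy sketch with made-up logits:

```python
import numpy as np

def softmax(logits):
    """Map raw network outputs (logits) to a categorical distribution."""
    z = logits - np.max(logits)   # subtract the max for numerical stability
    e = np.exp(z)
    return e / e.sum()

def cross_entropy(probs, true_class):
    """Negative log-likelihood of the true class under the predicted
    distribution; the loss that naturally pairs with a softmax output."""
    return -np.log(probs[true_class])

logits = np.array([2.0, 1.0, -1.0])   # illustrative raw scores
p = softmax(logits)                   # valid probabilities, summing to 1
loss = cross_entropy(p, true_class=0)
```

Viewing the pairing this way is what generalizes: sigmoid with binary cross-entropy, or a Gaussian output with squared error, follow the same distribution-plus-likelihood recipe.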
Nicola Rieke, PhD
Deep Learning is reshaping the healthcare industry and continues to establish itself as the de facto tool for numerous medical applications. This hands-on workshop explores the use of Deep Learning in Medical Imaging, starting from basic Image Classification (Part I) and progressing to advanced Data Augmentation and Segmentation with Generative Adversarial Networks (Part II). Technical requirement: participants must bring their own laptops.
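As a taste of the Part II material: the simplest data-augmentation transforms are label-preserving array operations that enlarge the effective training set without new annotations. A minimal NumPy sketch (the particular transform set is illustrative; GAN-based augmentation, covered in the workshop, generates new samples rather than transforming existing ones):

```python
import numpy as np

def augment(image, rng):
    """Return a randomly flipped/rotated copy of a 2-D image.
    These transforms preserve the image content (and hence its label),
    only changing orientation."""
    if rng.random() < 0.5:
        image = np.fliplr(image)      # random horizontal flip
    k = int(rng.integers(0, 4))       # random multiple of 90 degrees
    return np.rot90(image, k)

rng = np.random.default_rng(0)
image = np.arange(16, dtype=float).reshape(4, 4)
augmented = augment(image, rng)
# same shape and same multiset of pixel values, new orientation
```

For classification this is applied on the fly each epoch, so the network rarely sees the exact same array twice.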