Deep Learning Fundamentals
At a glance
Qualification:
Certificate of attendance "Deep Learning Fundamentals" (4 ECTS)
Start:
on request
Duration:
Costs:
CHF 2'300.00
Location:
ZHAW Life Sciences and Facility Management, Zürich, Campus Zentrum, Lagerstrasse 41, or online.
Language of instruction:
English
Course days:
Objectives and content
Target audience
This course is for people who want to learn what deep learning is and are not scared by a challenge. You should have some background in programming (not necessarily in Python) and some background in mathematics. If you are a beginner, that is fine: we will cover the basics you need, but you may need to work a bit more on your own during the classroom lectures. I will try to help you and direct you toward the right material as much as I can. In our two introductory weeks we will also cover what we need in terms of mathematics and Python. This course is also for you if you are an experienced data scientist who has not worked with neural networks before: you will be able to focus on the more specific deep learning topics instead of the basics, and use the time to dig deeper into the topics explained. I am happy to help you go deeper than what we will be able to cover in the lecture hours.
Objectives
Detailed Content
- Review of Python, in particular NumPy and its philosophy
- Matplotlib and visualisation
- Review of linear algebra, matrix multiplications, inverse, element-wise multiplication
- Computational graphs, introduction to TensorFlow ("construction" and "evaluation" phases)
- Linear regression with TensorFlow
- Python environment setup, development of a linear regression example in TensorFlow (a minimal sketch follows this list)
- Network with one neuron
- Logistic and linear regression with one neuron
- Preparation of a real dataset
- Neural Networks with many layers
- Overfitting concept explanation
- Weights initialisation (Xavier and He)
- Gradient descent algorithm
- Dynamical learning rate decay
- Optimizers (Momentum, RMSProp, Adam)
- Regularisation: L1, L2 and Dropout
- Metric analysis
- Explanation of why we need train, dev and test datasets
- How to split datasets in the deep learning context
- Strategies to solve and identify different dataset problems (overfitting, data from different sources or distributions, etc.)
- Hyperparameter Tuning (a random-search sketch follows this list)
- Grid Search
- Random Search
- Bayesian Optimization
- Coarse to fine optimization
- Parameter search on a logarithmic scale
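To give a flavour of the early exercises, here is a minimal sketch of linear regression with a single neuron. It is written against the current Keras API of TensorFlow 2 (the course materials and the book also show the older graph/session style), and the synthetic dataset and hyperparameter values are placeholders, not course code.

```python
import numpy as np
import tensorflow as tf

# Synthetic placeholder data: y = 2x + 1 plus a little noise.
rng = np.random.default_rng(42)
x = rng.uniform(-1.0, 1.0, size=(200, 1)).astype("float32")
y = (2.0 * x + 1.0 + 0.1 * rng.standard_normal((200, 1))).astype("float32")

# A "network" with one neuron: a single Dense unit with no activation
# is exactly linear regression (weight = slope, bias = intercept).
model = tf.keras.Sequential([
    tf.keras.Input(shape=(1,)),
    tf.keras.layers.Dense(1),
])

# Mean squared error minimised with plain gradient descent.
model.compile(optimizer=tf.keras.optimizers.SGD(learning_rate=0.1), loss="mse")
model.fit(x, y, epochs=100, verbose=0)

w, b = model.layers[0].get_weights()
print(f"learned slope ~ {w[0, 0]:.2f}, intercept ~ {b[0]:.2f}")
```

Swapping the loss for binary cross-entropy and adding a sigmoid activation turns the same single neuron into the logistic regression case listed above.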
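The hyperparameter-tuning topics can likewise be tried in a few lines. The following sketch (an illustration, not course code) draws learning rates at random on a logarithmic scale between 10^-4 and 10^0 and keeps the one with the lowest validation loss; `train_and_evaluate` is a hypothetical stand-in for whatever model and dataset you are tuning.

```python
import numpy as np

def train_and_evaluate(learning_rate: float) -> float:
    """Hypothetical stand-in: train a model with this learning rate and
    return its validation loss. Replace with your own training routine."""
    return (np.log10(learning_rate) + 2.5) ** 2  # toy loss, best near 10**-2.5

rng = np.random.default_rng(0)
n_trials = 20

# Sample the exponent uniformly, not the learning rate itself:
# this spreads the trials evenly over orders of magnitude (log scale).
exponents = rng.uniform(-4.0, 0.0, size=n_trials)
candidates = 10.0 ** exponents

results = [(lr, train_and_evaluate(lr)) for lr in candidates]
best_lr, best_loss = min(results, key=lambda pair: pair[1])
print(f"best learning rate ~ {best_lr:.1e} (val loss {best_loss:.3f})")
```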
Content
Format
- Advanced training course with certificate.
- Optional homework assignments.
- Course language: English (support in lab sessions can also be given in German)
This course offers a case-based introduction based on the book
U. Michelucci, Applied Deep Learning: A Case-Based Approach to Understanding Deep Neural Networks, Apress, ISBN: 978-1-4842-3789-2
Why offer a course on applied deep learning? After all, try a Google search on the subject and you will be overwhelmed by the huge number of results. The problem is that there is no course, blog or book that teaches, in a consolidated and beginner-friendly way, advanced subjects like regularisation, advanced optimisers such as Adam or RMSProp, mini-batch gradient descent, dynamical learning rate decay, dropout, hyperparameter search, Bayesian optimisation, metric analysis and so on.
I found material (typically of very bad quality) only for implementing very basic models on very simple datasets. If you want to learn how to classify the MNIST dataset of 10 handwritten digits you are in luck (almost everyone with a blog has done that, mostly copying the code from the TensorFlow website). Looking for something else, for example how logistic regression works? Not so easy. How to prepare a dataset for an interesting binary classification? Even more difficult. The goal of this course is to let you see more advanced material with new eyes. I cover the mathematical background as much as I can, because I feel it is necessary for a complete comprehension of the difficulties and the reasoning behind many concepts. You cannot understand why a big learning rate will make your model (strictly speaking, the cost function) diverge if you don't know how the gradient descent algorithm works mathematically. In real-life projects you will not have to calculate partial derivatives or complex sums, but you need to understand them to be able to evaluate what can work and what cannot (and especially why). At the end of the course we will make use of what we have learned and look at advanced applications of deep learning for the processing of sequential data (time series, sound and language), as well as generative models for text and images.
-- Umberto Michelucci and Claus Horn
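To make the point about learning rates concrete, here is a small sketch (not taken from the book) that runs plain gradient descent on the one-dimensional cost J(w) = w**2, whose gradient is 2w. With this cost the update multiplies w by (1 - 2*lr) each step, so a step size below 1 shrinks w towards the minimum, while a step size above 1 makes each step overshoot further and the iterates blow up, which is exactly the divergence described above.

```python
def gradient_descent(learning_rate: float, steps: int = 10, w0: float = 1.0) -> float:
    """Minimise the toy cost J(w) = w**2 (gradient 2*w) starting from w0."""
    w = w0
    for _ in range(steps):
        w = w - learning_rate * 2.0 * w  # standard update: w <- w - lr * dJ/dw
    return w

# Small learning rate: the update factor (1 - 2*lr) has magnitude < 1, so w -> 0.
print(gradient_descent(0.1))   # ~0.107 after 10 steps
# Too large a learning rate: |1 - 2*lr| > 1, each step overshoots, w blows up.
print(gradient_descent(1.5))   # 1024.0 after 10 steps
```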
Exam / Credits
To obtain the 4 ECTS credits, students have to receive a "Passed" for:
- A final project (presented as a Jupyter Notebook) where a specific use case is developed end-to-end
CAS in Digital Life Sciences
This module is part of the CAS in Digital Life Sciences continuing education programme, but can also be attended independently of the CAS. Credit points earned for this module can be credited to the CAS course at a later date, provided the relevant general conditions are fulfilled.
More information here: CAS in Digital Life Sciences
Methodology
The course is strongly based on applied exercises. Typically a lecture is split into (roughly) 40 minutes of theory, 40 minutes of practical programming exercises, 40 minutes of theory and 40 minutes of practical programming exercises. Sometimes we deviate from this pattern if more time is needed for one part or another, but everything we look at will also be part of the exercises, so you will have a chance to try everything we discuss. If you want to get a good impression of how we work (and you know how to work with Jupyter notebooks), you can check the GitHub repository for the Spring 2019 edition of the course:
https://github.com/michelucci/zhaw-dlcourse-spring2019
The course follows the book https://www.apress.com/gp/book/9781484237892, so you can have a look at it if you are interested. The electronic edition is included in the course price.
More details about the implementation
Documents & documentation in English
Enquiries and contact
- Umberto Michelucci studied Theoretical Physics in Italy, the USA and Germany. He is the author of two books on deep learning and has published several peer-reviewed papers in machine learning. He actively collaborates with many universities in Europe (ETH, the University of Basel, the University of Torino and the Politecnico di Torino, among others) on many research projects (including EU-funded ones in the Horizon 2020 programme) that deal with artificial intelligence in fields such as medicine, economics, insurance, mathematics, physics and astrophysics. He is also the founder of TOELT llc, a company that carries out international research in artificial intelligence in many fields (www.toelt.ai), and leads the AI Center of Excellence at Helsana Versicherung AG.
- Dr. Claus Horn is a physicist (formerly at CERN and Stanford University) with over ten years of experience in leading applied AI projects in various industries in Switzerland. He works as a lecturer and researcher at ZHAW, where he focuses on the development of life-science-specific methods and applications of artificial intelligence. He is the founder of the Reinforcement Learning Zurich community and leads the continuing education at the Institute of Applied Simulation.
Provider
Instructors
- Dr. Umberto Michelucci (LinkedIn)
- Dr. Claus Horn
Application
General terms and conditions
Start | Application deadline | Registration link |
---|---|---|
on request | on request | Application |