Deep Learning Fundamentals
This course offers a case-based introduction based on the book
U. Michelucci, Applied Deep Learning: A Case-Based Approach to Understanding Deep Neural Networks, APRESS, ISBN: 978-1-4842-3789-2
Why offer a course on applied deep learning? After all, try a Google search on the subject and you will be overwhelmed by the huge number of results. The problem is that there is no course, blog or book that teaches, in a consolidated and beginner-friendly way, advanced subjects like regularisation, advanced optimisers such as Adam or RMSProp, mini-batch gradient descent, dynamic learning rate decay, dropout, hyperparameter search, Bayesian optimisation, metric analysis and so on.
I found material (typically of very poor quality) only for implementing very basic models on very simple datasets. If you want to learn how to classify the MNIST dataset of 10 handwritten digits, you are in luck (almost everyone with a blog has done that, mostly copying the code found on the TensorFlow website). Searching for something else, to learn how logistic regression works? Not so easy. How to prepare a dataset to perform an interesting binary classification? Even more difficult. The goal of this course is to let you see more advanced material with new eyes. I cover the mathematical background as much as I can, because I feel it is necessary for a complete comprehension of the difficulties and reasoning behind many concepts. You cannot understand why a big learning rate will make your model (strictly speaking, the cost function) diverge if you don't know how the gradient descent algorithm works mathematically. In real-life projects you will not have to calculate partial derivatives or complex sums, but you need to understand them to be able to evaluate what can work and what cannot (and especially why).
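To see concretely why a large learning rate makes the cost diverge, here is a minimal sketch (not course material): gradient descent on the one-dimensional cost f(w) = w², whose gradient is 2w. Each update multiplies w by (1 - 2·lr), so any learning rate above 1.0 makes |w| grow at every step instead of shrinking.

```python
def gradient_descent(lr, steps=50, w=1.0):
    """Minimise f(w) = w**2 by gradient descent; the gradient of f is 2*w."""
    for _ in range(steps):
        w = w - lr * 2 * w  # each step multiplies w by (1 - 2*lr)
    return w

# A small learning rate shrinks w towards the minimum at 0 ...
print(abs(gradient_descent(lr=0.1)))  # very close to 0
# ... while a learning rate above 1.0 makes |w| blow up.
print(abs(gradient_descent(lr=1.1)))  # very large
```

The same mechanism drives divergence in real networks, just in many dimensions at once.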
-- Umberto Michelucci
At a Glance
Qualification: course certificate (4 ECTS)
Start: 17.09.2019, 17:20
Duration: evening course, 12 sessions of 3 hours, Tuesday evenings 17:15-20:00
Cost: CHF 1'990.00
Note on costs:
Standard cost: CHF 1'990 (incl. course material); CHF 1'900 if registered before 1.7.2019
For students (with proof of enrolment): CHF 430 (incl. course material)
Room ZL 03.09
17.09.2019, 24.09.2019, 01.10.2019, 08.10.2019, 22.10.2019, 29.10.2019, 05.11.2019, 12.11.2019, 19.11.2019, 26.11.2019, 03.12.2019
Please note that on 15.10.2019 there is no lecture.
Whatever you are studying right now, if you are not getting up to speed on deep learning, neural networks, etc., you lose.
Goals and Content
This course is for people who are willing to learn what deep learning is and are not scared by a challenge. You should have some background in programming (not necessarily in Python) and some background in mathematics. If you are a beginner, that is fine. We will cover the basics you need, but you may need to work a bit more on your own outside the classroom lectures. I will try to help you and direct you toward the right material as much as I can. In our two introductory weeks we will also cover what we need in terms of mathematics and Python. This course is also for you if you are an experienced data scientist but have not worked with neural networks before. You will be able to focus on the more specific deep learning topics instead of the basics, using the time to dig deeper into the topics explained. I am happy to help you go deeper than what we will be able to cover in the lecture hours.
- Review of Python, in particular NumPy and its philosophy
- Matplotlib and visualisation
- Review of linear algebra: matrix multiplication, inverses, element-wise multiplication
- Computational graphs; introduction to TensorFlow ("construction" and "evaluation" phases)
- Linear regression with TensorFlow
- Python environment setup; development of a linear regression example in TensorFlow
- Networks with one neuron
- Logistic and linear regression with a single neuron
- Preparation of a real dataset
- Neural Networks with many layers
- Overfitting concept explanation
- Weight initialisation (Xavier and He)
- Gradient descent algorithm
- Dynamic learning rate decay
- Optimizers (Momentum, RMSProp, Adam)
- Regularisation: L1, L2 and dropout
- Metric analysis
- Explanation of why we need train, dev and test datasets
- How to split datasets in the deep learning context
- Strategies to identify and solve different dataset problems (overfitting, data from different sources or distributions, etc.)
- Hyperparameter Tuning
- Grid Search
- Random Search
- Bayesian Optimization
- Coarse to fine optimization
- Parameter search on a logarithmic scale
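As one illustration of the "parameter search on a logarithmic scale" item above (a minimal sketch, not taken from the course material): when random-searching a learning rate between 1e-4 and 1e-1, sample the exponent uniformly rather than the value itself, otherwise almost all draws land near the upper end of the range.

```python
import random

def sample_learning_rate(low_exp=-4.0, high_exp=-1.0):
    """Draw a learning rate uniformly on a log scale in [1e-4, 1e-1]."""
    exponent = random.uniform(low_exp, high_exp)
    return 10 ** exponent

random.seed(0)
candidates = [sample_learning_rate() for _ in range(10)]
print(candidates)  # every value lies between 1e-4 and 1e-1
```

Each candidate would then be used to train a model, keeping the setting with the best validation metric.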
Advanced training course with certificate. 4 ECTS credits.
Optional homework assignments.
Course language: English (support in lab sessions can also be given in German)
Exam / Credits
To obtain the 4 ECTS credits, students must receive a «Passed» on
- A final project (presented as a Jupyter Notebook) where a specific use case is developed end-to-end
The course is strongly based on applied exercises. Typically a lecture is split into (roughly) 40 minutes of theory, 40 minutes of practical programming exercises, 40 minutes of theory, and 40 minutes of practical programming exercises. Sometimes we deviate from this pattern if more time is needed for one part or another. But everything we look at will also be part of the exercises, so you will have a chance to try everything we discuss. If you want to get a good impression of how we work (and you know how to work with Jupyter notebooks), you can check the GitHub repository for the Spring 2019 edition of the course.
The course follows the book www.apress.com/gp/book/9781484237892, so you can check it if you are interested. The electronic edition is included in the course price.
Materials & documentation in English
Advice and Contact
Start Dates and Registration