# Machine Learning and Having It Deep and Structured (2018 Spring)

Course Materials

## Theory 1: Why Deep Structure?
| Topic | PPT | Video | Homework |
|---|---|---|---|
| Theory 1-1: Can shallow network fit any function? | PPT | View | HW1-1 |
| Theory 1-2: Potential of Deep | | View | |
| Theory 1-3: Is Deep better than Shallow? | | View | |
## Theory 2: Optimization

| Topic | PPT | Video | Homework |
|---|---|---|---|
| Theory 2-1: When Gradient is Zero | PPT | View | HW1-2 |
| Theory 2-2: Deep Linear Network | | View | |
| Theory 2-3: Does Deep Network have Local Minima? | | View | |
| Theory 2-4: Geometry of Loss Surfaces (Conjecture) | | View | |
| Theory 2-5: Geometry of Loss Surfaces (Empirical) | | View | |
## Theory 3: Generalization

| Topic | PPT | Video | Homework |
|---|---|---|---|
| Theory 3-1: Capability of Generalization | PPT | View | HW1-3 |
| Theory 3-2: Indicator of Generalization | PPT | View | |
## Special Network Structure

| Topic | PPT | Video | Homework |
|---|---|---|---|
| Seq-to-seq Learning | PPT | View | HW2-1 |
| Pointer Network | PPT | View | |
| Recursive Network | PPT | View | HW2-2 |
| Attention-based Model | PPT | View | |
## Generative Adversarial Network (GAN)

| Topic | PPT | Video | Homework |
|---|---|---|---|
| Introduction | PPT | View | HW3-1 |
| Conditional GAN | PPT | View | |
| Unsupervised Conditional GAN | PPT | View | |
| Theory | PPT | View | HW3-2 and tips |
| General Framework | PPT | View | |
| WGAN, EBGAN | PPT | View | |
| InfoGAN, VAE-GAN, BiGAN | PPT | View | HW3-1 |
| Application to Photo Editing | PPT | View | |
| Application to Sequence Generation | PPT | View | |
| Application to Speech (by Dr. Yu Tsao) | PPT | | |
| Evaluation of GAN | PPT | View | |