Torch Training by Markus Liedl
Torch features fast, flexible GPU computation. Many reusable modules are available, which eases the development of complicated models, and its clean architecture lets you rearrange existing modules in many ways. Through the Lua FFI it can readily be extended in other languages. And because Torch does not compile dataflow graphs but simply executes them, there is no compilation pause when you start a script.
The Torch Training comprises five days in which you'll learn everything you need to use Torch productively:
- Lua language basics
- Torch tensor basics (views, memory sharing, subtensors, math, aggregations)
- fully connected layers and convolutional layers for processing images
- different non-linearities
- optimizing a model with the optim module (SGD, Adam, ...)
- batch, weight, and layer normalization, and why they help
- running Torch networks on the GPU
- how Torch modularizes the backpropagation algorithm
- extending Torch with your own application-specific modules
- improving a network's generalisation with dropout, noise, and appropriate data augmentations
- lots of exercises
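To give a flavour of the tensor topics above, here is a minimal sketch (the values are illustrative) of how Torch tensors expose views and shared memory:

```lua
require 'torch'

-- a 3x4 tensor filled with 1..12
local t = torch.range(1, 12):resize(3, 4)

-- t[2] is a view of the second row: it shares storage with t,
-- so filling it with zeros also changes t
local row = t[2]
row:fill(0)

-- narrow(dim, index, size) returns a subtensor, again without copying:
-- here, the first two columns
local sub = t:narrow(2, 1, 2)

-- aggregations over the whole tensor
print(t:sum(), t:mean())
```

Because views share storage, writes through one name are visible through all others; copying only happens when you ask for it explicitly (e.g. with `clone()`).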
The networks in the course target supervised learning for classification and regression.
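A tiny classification example in that spirit, sketched with the standard `nn` and `optim` packages (the layer sizes and random data are illustrative, not taken from the course material):

```lua
require 'nn'
require 'optim'

-- a small fully connected classifier: 10 inputs -> 2 classes
local model = nn.Sequential()
model:add(nn.Linear(10, 16))
model:add(nn.ReLU())
model:add(nn.Linear(16, 2))
model:add(nn.LogSoftMax())

local criterion = nn.ClassNLLCriterion()
local params, gradParams = model:getParameters()

-- one SGD step on a random batch of 8 examples (placeholder data)
local input  = torch.randn(8, 10)
local target = torch.Tensor(8):random(1, 2)

local function feval(p)
   gradParams:zero()
   local output = model:forward(input)
   local loss = criterion:forward(output, target)
   model:backward(input, criterion:backward(output, target))
   return loss, gradParams
end

optim.sgd(feval, params, {learningRate = 0.01})
```

Swapping `optim.sgd` for `optim.adam`, or moving the model and data to the GPU with `:cuda()`, follows the same pattern.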
If you are interested in the training, mail me at firstname.lastname@example.org. Normally the training takes place in or near Munich, Germany. I also offer to join your team and work with you on specific topics.
This course is available in English and German.
Related Deep Learning courses are