Deep learning neural network models have been successfully applied to natural language processing and are now radically changing how we interact with machines (Siri, Amazon Alexa, Google Home, Skype Translator, Google Translate, or the Google search engine). These models are able to infer a continuous representation for words and sentences, instead of relying on hand-engineered features as in other machine learning approaches. The seminar will introduce the main deep learning models used in natural language processing, allowing attendees to gain a hands-on understanding of them and to implement them in TensorFlow.
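As a small illustration of that idea, the sketch below uses TensorFlow's Keras API to map token ids to learned continuous vectors and pool them into a rough sentence representation; the vocabulary size, embedding dimension, and token ids are hypothetical placeholders, not part of the seminar material.

    import tensorflow as tf

    # Hypothetical sizes for illustration only.
    vocab_size = 10_000
    embedding_dim = 128

    # A learned embedding replaces hand-engineered features with dense vectors.
    embedding = tf.keras.layers.Embedding(input_dim=vocab_size, output_dim=embedding_dim)

    # Placeholder token ids for a toy "sentence" (a real pipeline would use a tokenizer).
    token_ids = tf.constant([[12, 305, 7, 42]])

    word_vectors = embedding(token_ids)                   # shape: (1, 4, 128)
    sentence_vector = tf.reduce_mean(word_vectors, axis=1)  # crude mean-pooled sentence representation
    print(word_vectors.shape, sentence_vector.shape)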
Syllabus
Introduction to machine learning and NLP with TensorFlow
Multilayer Perceptron
Word embeddings and recurrent neural networks
Seq2seq, neural machine translation and better RNNs
Attention, neural machine translation and natural language inference
Transfer learning
Bridging the gap between natural languages and the visual world