COVID update: Due to the pandemic the course is also available online. In addition to onsite attendance, the classes will be broadcast live online. The practical labs are also offered online, in split groups with one lecturer per group. Given our online teaching experience over the last months, we can offer a high-quality and engaging course, in both the theoretical and hands-on practical sessions.
Deep learning neural network models have been successfully applied to natural language processing, and are now radically changing how we interact with machines (Siri, Amazon Alexa, Google Home, Skype translator, Google Translate, or the Google search engine). These models infer continuous representations for words and sentences, instead of using hand-engineered features as in other machine learning approaches. The seminar will introduce the main deep learning models used in natural language processing, allowing attendees to gain a hands-on understanding of the models and implement them in TensorFlow.
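The core idea above, representing each word as a learned continuous vector rather than a hand-engineered feature, can be sketched in a few lines. This is an illustrative toy only: the vocabulary and the random embedding matrix are invented for the example, and in the course these vectors would be learned during training (e.g. with an embedding layer in TensorFlow).

```python
import numpy as np

# Toy vocabulary: word -> row index in the embedding matrix (invented for illustration).
vocab = {"deep": 0, "learning": 1, "language": 2, "model": 3}

# Randomly initialised embedding matrix; in a real model these values are
# learned from data rather than fixed at random.
embedding_dim = 4
rng = np.random.default_rng(0)
embeddings = rng.normal(size=(len(vocab), embedding_dim))

def embed(sentence):
    """Map each known word of a sentence to its continuous vector."""
    return np.stack([embeddings[vocab[w]] for w in sentence.split() if w in vocab])

vectors = embed("deep learning language model")
print(vectors.shape)  # one embedding_dim-sized vector per word
```

The point of the sketch is only the lookup: every word becomes a dense vector, and downstream layers operate on those vectors instead of sparse hand-built features.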
This course is a 35-hour introduction to the main deep learning models used in text processing, covering the latest developments, including Transformers and pre-trained (multilingual) language models like GPT, BERT and XLM-R. It combines theoretical and practical hands-on classes. Attendees will be able to understand the models and implement them in TensorFlow. The course is addressed to professionals, researchers and students who want to understand and apply deep learning techniques to text. The practical part requires basic programming experience, a university-level course in computer science and experience in Python. Basic math skills (algebra or pre-calculus) are also needed.
Associate Professor at NYU, CIFAR Associate Fellow