Description: Neural networks have led to previously unimaginable advances in NLP engineering tasks. The main criticism against them from a linguistic point of view is that neural models – while fine for “language engineering tasks” – are thought of as black boxes, and that their parameter opacity prevents us from discovering new facts about the nature of language itself, or about specific languages. In this talk I will challenge that assumption and show that there are ways to uncover facts about language, even with a black-box learner. I will discuss specific experiments with neural models and sound embeddings that reveal new information about the organization of sound systems in human languages (phonology), give us insight into the complexity of word formation (morphology), give us models of why and when irregular forms – surely an inefficiency in a communication system – can persist over long periods of time (historical linguistics), and reveal what the boundaries of pattern learning are (how much information do we minimally need to learn a grammatical aspect of a language, such as its word inflection or sentence formation?).
Speaker: Mans Hulden, University of Colorado.
When: Thursday, 8 November
12:00 – 1:00pm
Where: Ada Lovelace room.