Session 1. Natural Language Processing
Natural Language Understanding and Conversational AI
Natural language processing (NLP) has made dramatic advances over the last three years, ranging from deep generative models for text-to-speech, such as WaveNet, to the extensive deployment of deep contextual language models, such as BERT. Pre-training with models like BERT has significantly raised performance on almost all NLP tasks, enabled much better domain adaptation, and brought human-level performance on tasks like answering straightforward factual questions. New neural language models have also brought much more fluent language generation. On the one hand, we should not be too impressed by these linguistic savants: things like understanding the consequences of events in a story or performing common-sense reasoning remain out of reach. On the other hand, I will discuss how we now live in an era with many good commercial uses of NLP, where much of the heavy lifting has already been done in the construction of large but downloadable models. I present some of our work on understanding how these models learn to be so proficient, and on how we can build new types of pre-trained models that are much more compute-efficient. Finally, I turn to conversational agents, where neural models can produce accurate task-based dialog agents and more effective open-domain social bots.
Speaker Introduction
Christopher Manning is the inaugural Thomas M. Siebel Professor in Machine Learning in the Departments of Linguistics and Computer Science at Stanford University, Director of the Stanford Artificial Intelligence Laboratory (SAIL), and an Associate Director of the Stanford Human-Centered Artificial Intelligence Institute (HAI). His research goal is computers that can intelligently process, understand, and generate human language material. Manning is a leader in applying Deep Learning to Natural Language Processing, with well-known research on the GloVe model of word vectors, question answering, tree-recursive neural networks, machine reasoning, neural network dependency parsing, neural machine translation, sentiment analysis, and deep language understanding. He also focuses on computational linguistic approaches to parsing, natural language inference, and multilingual language processing, including being a principal developer of Stanford Dependencies and Universal Dependencies. Manning has coauthored leading textbooks on statistical approaches to Natural Language Processing (Manning and Schütze, 1999) and information retrieval (Manning, Raghavan, and Schütze, 2008), as well as linguistic monographs on ergativity and complex predicates. He is an ACM Fellow, an AAAI Fellow, and an ACL Fellow, and a Past President of the ACL (2015). His research has won ACL, Coling, EMNLP, and CHI Best Paper Awards. He holds a B.A. (Hons) from The Australian National University and a Ph.D. from Stanford (1994), and he held faculty positions at Carnegie Mellon University and the University of Sydney before returning to Stanford. He is the founder of the Stanford NLP group (@stanfordnlp) and manages development of the Stanford CoreNLP software.