Speaker

Hao Zhou, Senior Researcher at ByteDance AI Lab (TikTok).

Time

2020.11.25 16:00-17:30

Abstract

Natural language generation is a fundamental technology behind many applications, such as machine writing, machine translation, and chatbots.

In this talk, we will begin with a taxonomy of deep generative models for text generation and then introduce our recent work in its different branches. Because the space of possible sentences grows exponentially with length, their density cannot be modeled directly; state-of-the-art text generation models therefore employ neural networks such as RNNs or Transformers to parameterize the density auto-regressively. We will first introduce some advanced approaches to better factorize this density. We then turn to variational auto-encoders (VAEs), which approximate the density of sentences with variational inference. Our recent work incorporates syntactic latent variables to improve the quality of text generated by VAEs, and we also propose a variational approach for interpretable text generation. Finally, unlike the previous approaches, which maintain an explicit density over sentences, we explore a novel Markov Chain Monte Carlo approach called CGMH for constrained text generation, which keeps no explicit density of sentences and abandons the left-to-right generation order.
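
For concreteness, here is a minimal sketch of the three density treatments mentioned above, using standard textbook formulations (the notation is ours and is not taken from the talk). Auto-regressive models factorize the density of a sentence $x = (x_1, \dots, x_T)$ token by token:

$$p(x) = \prod_{t=1}^{T} p(x_t \mid x_{<t})$$

A VAE instead introduces a latent variable $z$ and maximizes the evidence lower bound (ELBO) on the log-density:

$$\log p(x) \;\ge\; \mathbb{E}_{q(z \mid x)}\big[\log p(x \mid z)\big] - \mathrm{KL}\big(q(z \mid x) \,\|\, p(z)\big)$$

Metropolis-Hastings samplers such as CGMH sidestep the explicit density altogether: given the current sentence $x$ and a proposal $x'$ drawn from a proposal distribution $g$, the move is accepted with probability

$$\alpha(x' \mid x) = \min\!\left(1,\; \frac{\pi(x')\, g(x \mid x')}{\pi(x)\, g(x' \mid x)}\right)$$

where $\pi$ scores sentences (e.g., by fluency and constraint satisfaction) and only needs to be known up to a normalizing constant.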

Bio

Dr. Hao Zhou is a senior researcher at ByteDance AI Lab. He obtained his Ph.D. from the Department of Computer Science at Nanjing University in 2017 and is the recipient of the Chinese Association of Artificial Intelligence 2019 Doctoral Dissertation Award. His research interests are machine learning and its applications to natural language processing, with a current focus on deep generative models for NLP. He has served on the program committees of ACL, EMNLP, NeurIPS, etc., and has more than 30 publications in prestigious conferences and journals, including ACL, EMNLP, NAACL, TACL, NeurIPS, and ICLR. He has given several tutorials at NLP conferences such as EMNLP and NLPCC.