EMNLP-2020 Tutorial

Representation, Learning and Reasoning on Spatial Language for Downstream NLP Tasks

Slides:

  1. Part 1
  2. Part 2
  3. Part 3
  4. Part 4

Abstract:

Understanding spatial semantics expressed in natural language can become highly complex in real-world applications, including language grounding, navigation, visual question answering, and more generic human-machine interaction and dialogue systems. In many such downstream tasks, explicit representation of spatial concepts and relationships can improve the capabilities of machine learning models in reasoning and deep language understanding. In this tutorial, we overview cutting-edge research results and existing challenges related to spatial language understanding, including semantic annotations, existing corpora, symbolic and sub-symbolic representations, qualitative spatial reasoning, spatial common sense, and deep and structured learning models. We discuss recent results on the above-mentioned applications, which require spatial language learning and reasoning, and highlight research gaps and future directions.

Speakers:

Parisa Kordjamshidi

Michigan State University, kordjams@msu.edu

Parisa Kordjamshidi is an Assistant Professor in the Computer Science Department at Michigan State University. Her research interests are in Natural Language Processing and Machine Learning. She has worked on spatial semantics extraction and annotation schemes, mapping language to formal spatial representations, spatial ontologies, structured output prediction models for information extraction, and combining vision and language for spatial language understanding. She was awarded an NSF CAREER award in February 2019 to work on combining learning and reasoning for spatial language understanding. On the machine learning side, she works on the integration of domain knowledge in neural models, and she is the PI of an ONR project on declarative learning-based programming for the integration of domain knowledge in learning. Further related to the topic of this tutorial, she has organized and co-organized the shared tasks on spatial role labeling (SpRL-2012, SpRL-2013) and the Space Evaluation workshop (SpaceEval-2015) in the SemEval series, as well as the Multimodal Spatial Role Labeling workshop (mSpRL) at CLEF-2017, with the goal of considering both vision and language media for spatial information extraction. She also organized the SpLU-2018, RoboNLP-SpLU-2019, and SpLU-2020 workshops, co-located with NAACL-2018, NAACL-2019, and EMNLP-2020, respectively.


James Pustejovsky

Brandeis University, jamesp@cs.brandeis.edu

James Pustejovsky is the TJX Feldberg Chair in Computer Science at Brandeis University, where he is also Chair of the Linguistics Program, Chair of the Computational Linguistics MA Program, and Director of the Lab for Linguistics and Computation. He received his B.S. from MIT and his Ph.D. from UMass Amherst. He has worked on computational and lexical semantics for 25 years and is the chief developer of Generative Lexicon Theory. Since 2002, he has been working on the development of a platform for temporal reasoning in language, called TARSQI (www.tarsqi.org). Pustejovsky is the chief architect of TimeML and ISO-TimeML, a recently adopted ISO standard for temporal information in language, as well as the recently adopted ISO-Space standard, a specification for spatial information in language. He has developed a modeling framework for representing linguistic expressions and interactions as multimodal simulations. This platform, VoxML, enables real-time communication between humans and computers or robots for joint tasks, utilizing speech, gesture, gaze, and action. He is currently working with robotics researchers in HRI to allow the VoxML platform to act as both a dialogue management system and a simulation environment that reveals real-time epistemic state and perceptual input to a computational agent. His areas of interest include computational semantics, temporal and spatial reasoning, and language annotation for machine learning.


Marie-Francine Moens

KU Leuven, sien.moens@cs.kuleuven.be

Marie-Francine Moens is a Full Professor in the Department of Computer Science, KU Leuven. She has a special interest in machine learning for natural language understanding and in grounding language in a visual context. She holds the prestigious ERC Advanced Grant CALCULUS (2018-2023), awarded by the European Research Council, on the topic of language understanding. She is currently an associate editor of the journal IEEE Transactions on Pattern Analysis and Machine Intelligence (TPAMI). In 2011 and 2012 she was appointed chair of the European Chapter of the Association for Computational Linguistics (EACL) and was a member of the executive board of the Association for Computational Linguistics (ACL). From 2014 to 2018 she was the scientific manager of the EU COST action iV&L Net (The European Network on Integrating Vision and Language).


Thanks to Tim Moran for creating this webpage.