The content below is about SCAI’17.
About
There is a gradual shift towards searching and presenting information in a conversational form. Chatbots, personal assistants on our phones, and eyes-free devices are increasingly used for different purposes, including information retrieval and exploration. On the other hand, information retrieval empowers dialogue systems to answer questions and to gather context for assisting users with their tasks. With the recent success of deep learning in different areas of natural language processing, now appears to be the right time to build the foundations for making search conversational.
While significant progress has been made in building goal-oriented dialogue systems and open-domain chit-chat bots, much remains to be done on the theory and practice of conversation-based search and search-based dialogue.
This workshop aims to bring together AI/Deep Learning specialists on the one hand and search/IR specialists on the other, to lay the groundwork for search-oriented conversational AI and to establish future directions and collaborations.
Topics of Interest
- Surfacing search results in the form of a dialogue (how should the information a search engine returns be presented as a dialogue? Which models should be used for dialogue-state tracking?)
- Evaluation of search-oriented conversational AI (despite early attempts, computing a dialogue system’s quality in a scalable way remains an open problem)
- From conversational AI to personal assistants (how can stable and consistent assistant behavior be maintained?)
- The role of personalization in conversational AI and in its evaluation (users differ; can we personalize their experience?)
- Deep Learning for conversational AI
- (Deep) Reinforcement Learning for conversational AI
- Voice as input (how will voice interactions with a personal assistant affect existing models?)
Submissions should be between two and six pages in the ACM format and should be made via the following address:
https://easychair.org/conferences/?conf=scai17
We explicitly seek the following types of papers:
- conceptual papers, proposals for panel discussion
- early experimental results
- preliminary results of the convai.io competition
Important Dates
- Submission:
Series 1: Submission Deadline - July 15 and Notification - August 1
Series 2: Submission Deadline - August 22 (extended) and Notification - September 7
Deadline for camera-ready version - September 15
- The workshop - October 1
Organizers
Program Committee
- Mostafa Dehghani, University of Amsterdam
- Jaap Kamps, University of Amsterdam
- Tom Kenter, University of Amsterdam
- Scott Roy, Google
- Ryen W. White, Microsoft Research
- Hosein Azarbonyad, University of Amsterdam
- Valentin Malykh, Institute for Systems Analysis of the Russian Academy of Sciences
- Evgeny Kharitonov, Criteo
- Maksim Kretov, MIPT
- Nikita Smetanin, Replika.AI
Workshop Format
- Invited Speakers and Oral Presentations
- Panel Discussion (you can send us your suggestions for discussion topics)
- Breakout Session to plan a roadmap for Conversational AI (more information will be sent to participants)
- Poster Session
Invited Speakers
- Dilek Hakkani-Tür, Google Research, Mountain View
- Title: Deep Learning for Goal-Oriented Conversational Understanding
- Abstract: Recent advances in deep learning based approaches enable exciting new research frontiers for conversational systems. In this talk, I will present an end-to-end goal-oriented dialogue system, with components for language understanding, dialogue state tracking, policy, and language generation. These can be independently built and jointly optimized for dialogue quality and efficient task completion using supervised or reinforcement learning methods. The talk will summarize novel aspects of each component, and highlight remaining issues and challenges towards building human-level conversational systems. (A minimal sketch of such a modular pipeline appears after this list of speakers.)
- Filip Radlinski, Google Research, London
- Title: A Theoretical Framework for Conversational Search
- Abstract: I will present a theory and model of information interaction for conversational information retrieval. In particular, I will consider the question of what properties would be desirable for a conversational information retrieval system, so that the system can allow users to answer a variety of information needs in a natural and efficient manner. I will describe a small set of properties that, taken together, could measure the extent to which a system is conversational, as well as a theoretical model of a conversational system that implements the properties.
- Michel Galley, Microsoft Research, Redmond
- Title: Grounding Neural Conversation Models into the Real World
- Abstract: Neural conversation models are capable of generating natural sounding conversational interactions on a wide variety of topics. However, such fully data-driven models have been mostly applied to casual scenarios (e.g., “chit-chat”) and have yet to demonstrate that they can serve in more useful conversational applications. In this talk, I will present recent work on large-scale and open-domain neural conversation models grounded in external sources (e.g., textual knowledge bases, personalization data, images) that help produce more informative, contentful, and personalized responses.
- Slides
- Fabrizio Silvestri, Facebook, London
- Title: Search at FB
- Abstract: Search is a very important service that everybody uses daily. Facebook is investing heavily in search, and in this seminar I will present an overview of search at FB along with some details about the projects that are carried out by the London team. I will present the recent activities in the Query Alteration team, namely Query Rewriting, Speller, and Related Searches. In the talk I will also present an overview of the current solutions adopted, along with some of the research challenges that are peculiar to FB people.
- Ruslan Salakhutdinov, Apple AI & CMU, USA
- Title: Deep Learning for Reading Comprehension
- Abstract: In this talk, I will discuss deep learning models that can find semantically meaningful representations of words, learn to read documents, and answer questions about their content. I will first introduce the Gated-Attention (GA) Reader model that integrates a multi-hop architecture with a novel attention mechanism based on multiplicative interactions between the query embedding and the intermediate states of a recurrent neural network document reader. This enables the reader to build query-specific representations of tokens in the document for accurate answer selection. Time permitting, I will briefly introduce a fine-grained gating mechanism to dynamically combine word-level and character-level representations based on properties of the words. I will show that on several tasks these models improve upon many of the existing techniques. (A minimal sketch of the gated-attention step appears after this list of speakers.)
- Slides
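The modular pipeline named in the abstract of the first invited talk (Deep Learning for Goal-Oriented Conversational Understanding) can be pictured with a short sketch. The Python stubs below are purely illustrative and reflect our own assumptions about intents, slots, and actions, not the speaker’s system; in practice each stage is a learned model and the whole chain is optimized jointly with supervised or reinforcement learning.

```python
# Illustrative stubs for the four pipeline stages; intent, slot, and action names are invented.
from dataclasses import dataclass, field


@dataclass
class DialogueState:
    slots: dict = field(default_factory=dict)   # e.g. {"cuisine": "thai"}


def understand(utterance: str) -> dict:
    """Language understanding stub: map a user turn to an intent and slot values."""
    return {"intent": "find_restaurant", "slots": {"cuisine": utterance.split()[-1]}}


def track(state: DialogueState, lu_output: dict) -> DialogueState:
    """Dialogue state tracking stub: fold new slot values into the running state."""
    state.slots.update(lu_output["slots"])
    return state


def decide(state: DialogueState) -> str:
    """Policy stub: choose the next system action from the current state."""
    return "request(area)" if "area" not in state.slots else "offer(restaurant)"


def generate(action: str) -> str:
    """Language generation stub: realise the chosen action as text."""
    return {"request(area)": "Which part of town would you like?",
            "offer(restaurant)": "How about this place?"}[action]


state = track(DialogueState(), understand("find me some thai"))
print(generate(decide(state)))   # -> "Which part of town would you like?"
```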
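The gated-attention step described in the reading-comprehension talk above is also easy to illustrate. The PyTorch sketch below is our own rendering of the idea of multiplicative interactions between query and document representations, not the authors’ code; the dimensions and the dot-product attention are assumptions made for the example.

```python
# One gated-attention "hop": each document token attends over the query tokens,
# and the resulting query context gates the token state by element-wise product.
import torch
import torch.nn.functional as F


def gated_attention(doc_states, query_states):
    """doc_states: (batch, doc_len, dim); query_states: (batch, query_len, dim)."""
    scores = torch.bmm(doc_states, query_states.transpose(1, 2))   # (B, D, Q)
    alpha = F.softmax(scores, dim=-1)                              # attention over query tokens
    query_context = torch.bmm(alpha, query_states)                 # (B, D, dim)
    # Multiplicative gating builds query-specific token representations
    # that would feed the next recurrent layer in a multi-hop reader.
    return doc_states * query_context


# Toy usage with random tensors standing in for recurrent encoder outputs.
gated = gated_attention(torch.randn(2, 50, 128), torch.randn(2, 10, 128))  # (2, 50, 128)
```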
Industry Speakers
- Frode Sørmo, Amazon Alexa, London
- Title: Alexa: Speak with your Computer – Naturally
- Abstract: This talk will cover an introduction to Alexa, the brain behind devices such as the Echo. Alexa is designed around the idea that machines should learn to communicate like us, enabling customers to interact with devices in a more intuitive way using voice. Examples of Alexa’s skills include the ability to play music, answer general questions, set an alarm or timer, and more.
- Gary Ren, Xiaochuan Ni, Manish Malik, Qifa Ke, Microsoft AI and Research, Sunnyvale, CA
- Title: Conversational/Multiturn Question Understanding
- Abstract: Existing research on question understanding and answering has focused on standalone questions. However, as interactions between humans and machines become increasingly conversational, there is a need for understanding conversational/multiturn questions, defined here as questions that depend on the context of the current conversation. This paper presents a novel architecture that leverages NLP techniques, deep learning, and search engine web knowledge to understand these multiturn questions by reformulating them into standalone questions that a downstream information retrieval system/dialogue agent expects. This paper also briefly explores the benefits of having a search-powered system that can have guided conversations with users. (A toy illustration of such reformulation appears after this list of speakers.)
- Slides
- Nikita Smetanin, Replika.AI, Moscow, Russia
- Title: Building an Emotional conversation using Deep Learning
- Abstract: Retrieval-based conversation systems generally tend to rank highly those responses that are semantically similar, or even identical, to the given conversation context. Since the system’s goal is to find the most relevant response rather than the most semantically similar one, this tendency results in low-quality responses (a challenge that can be referred to as the echoing problem). To minimize this effect, we apply a hard negative mining approach at the training stage. The evaluation shows that the resulting model avoids echoing the context and achieves the best quality metrics on the benchmarks. (A minimal sketch of the hard negative mining step appears after this list of speakers.)
- Slides
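To make the notion of reformulating a multiturn question into a standalone one (from the Microsoft talk above) concrete, here is a toy Python illustration. The real architecture described in the talk uses learned models and web knowledge; this sketch merely substitutes an assumed salient entity for pronouns in the follow-up question.

```python
# Toy context-based rewriting: replace pronouns in the follow-up question with
# the salient entity from the previous turn (assumed to be given here).
PRONOUNS = {"he", "she", "it", "they", "him", "her", "them"}


def reformulate(follow_up: str, entity: str) -> str:
    tokens = [entity if t.lower().strip("?,.") in PRONOUNS else t for t in follow_up.split()]
    return " ".join(tokens)


# "Who founded Microsoft?" followed by "When was he born?" becomes a standalone question:
print(reformulate("When was he born?", "Bill Gates"))   # -> "When was Bill Gates born?"
```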
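The hard negative mining step named in the last abstract can likewise be sketched briefly. The PyTorch code below is an illustration under our own assumptions (cosine scoring, in-batch negatives, a margin ranking loss), not the implementation described in the talk: within each batch, the highest-scoring wrong response is taken as the negative, which penalises candidates that merely echo the context.

```python
# In-batch hard negative mining with a margin ranking loss.
import torch
import torch.nn.functional as F


def hard_negative_ranking_loss(context_emb, response_emb, margin=0.5):
    """context_emb, response_emb: (batch, dim); row i of each forms a true pair."""
    context_emb = F.normalize(context_emb, dim=-1)
    response_emb = F.normalize(response_emb, dim=-1)
    scores = context_emb @ response_emb.t()              # (B, B) cosine similarities
    positive = scores.diag()                             # score of each true response
    # Mask the positives, then take the hardest (highest-scoring) negative per row.
    mask = torch.eye(scores.size(0), dtype=torch.bool)
    hardest_negative = scores.masked_fill(mask, float("-inf")).max(dim=1).values
    return F.relu(margin - positive + hardest_negative).mean()


# Toy usage with random embeddings standing in for encoder outputs.
loss = hard_negative_ranking_loss(torch.randn(8, 64), torch.randn(8, 64))
```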
Panel Discussion: Participants
- Maarten de Rijke, University of Amsterdam, the Netherlands
- Michel Galley, Microsoft Research, Redmond, US
- Claudia Hauff, Delft University of Technology, the Netherlands
- Jeff Dalton, University of Glasgow, Glasgow, Scotland, UK
Accepted Papers
- Task-Oriented Query Reformulation with Reinforcement Learning by Rodrigo Nogueira and Kyunghyun Cho
- Will this dialogue be unsuccessful? Prediction using audio features by Margarita Kotti, Alexandros Papangelis and Yannis Stylianou
- LD-SDS: Towards an Expressive Spoken Dialogue System based on Linked-Data by Alexandros Papangelis, Panagiotis Papadakos, Margarita Kotti, Yannis Stylianou, Yannis Tzitzikas and Dimitris Plexousakis
- Combining Search with Structured Data to Create a More Delightful User Experience in Open Domain Dialogue by Kevin Bowden, Shereen Oraby, Jiaqi Wu, Amita Misra and Marilyn Walker
- Voice-based Data Exploration: Chatting with your Database by Carsten Binnig, Ugur Cetintemel, Prasetya Utama and Nathaniel Weir
- Conversational Exploratory Search via Interactive Storytelling by Svitlana Vakulenko, Ilya Markov and Maarten de Rijke
Posters
Workshop Schedule
The detailed schedule of the workshop can be found here.