Crovari, P., Pidó, S., Garzotto, F., Ceri, S. (2021). Show, Don't Tell. Reflections on the Design of Multi-modal Conversational Interfaces. In Chatbot Research and Design: 4th International Workshop, CONVERSATIONS 2020, Virtual Event, November 23–24, 2020, Revised Selected Papers (pp. 64–77). Springer, Cham. https://doi.org/10.1007/978-3-030-68288-0_5
Show, Don’t Tell. Reflections on the Design of Multi-modal Conversational Interfaces
Garzotto, Franca;
2021
Abstract
Conversational Agents are the future of Human-Computer Interaction. Technological advancements in Artificial Intelligence and Natural Language Processing allow the development of Conversational Agents that support increasingly complex tasks. As task complexity increases, conversation alone is no longer sufficient to support the interaction effectively, and other modalities must be integrated to relieve the cognitive burden on the end user. To this end, we define and discuss a set of design principles for creating effective multi-modal Conversational Agents. We start from the best practices in the literature for multi-modal interaction and uni-modal Conversational Interfaces and examine how they apply in our context. We then validate our results with an empirical evaluation. Our work sheds light on a largely unexplored field and inspires the future design of such interfaces.