Sekulic, I., Terragni, S., Guimaraes, V., Khau, N., Guedes, B., Filipavicius, M., et al. (2024). Reliable LLM-based User Simulator for Task-Oriented Dialogue Systems. In Proceedings of the Workshop on Simulating Conversational Intelligence in Chat (SCI-CHAT 2024) (pp. 19–35). Association for Computational Linguistics (ACL).
Reliable LLM-based User Simulator for Task-Oriented Dialogue Systems
Terragni, S.; et al.
2024
Abstract
In the realm of dialogue systems, user simulation techniques have emerged as a game-changer, redefining the evaluation and enhancement of task-oriented dialogue (TOD) systems. These methods are crucial for replicating real user interactions, enabling applications such as synthetic data augmentation, error detection, and robust evaluation. However, existing approaches often rely on rigid rule-based methods or on annotated data. This paper introduces DAUS, a Domain-Aware User Simulator. Leveraging large language models, we fine-tune DAUS on real examples of task-oriented dialogues. Results on two relevant benchmarks showcase significant improvements in terms of user goal fulfillment. Notably, we observe that fine-tuning enhances the simulator's coherence with user goals, effectively mitigating hallucinations, a major source of inconsistencies in simulator responses.
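The record contains no code, but as a rough illustration of the kind of supervised fine-tuning setup the abstract describes (conditioning an LLM user simulator on a user goal and the dialogue history, and training it to reproduce real user turns), the following is a minimal, assumption-laden Python sketch. The field names (user_goal, turns, speaker, text) and the prompt layout are hypothetical, chosen for the example; they are not the dataset schema or prompt format used by the authors.

```python
# Hypothetical sketch: turning goal-annotated task-oriented dialogues into
# prompt/completion pairs for supervised fine-tuning of an LLM user simulator.
# Field names (user_goal, turns, speaker, text) are illustrative assumptions.

import json
from typing import Dict, List


def build_examples(dialogue: Dict) -> List[Dict[str, str]]:
    """For every user turn, create one training example:
    prompt     = user goal + dialogue history so far
    completion = the real user's next utterance (the behaviour to imitate)."""
    goal = json.dumps(dialogue["user_goal"], ensure_ascii=False)
    examples: List[Dict[str, str]] = []
    history: List[str] = []
    for turn in dialogue["turns"]:
        line = f'{turn["speaker"].upper()}: {turn["text"]}'
        if turn["speaker"] == "user":
            prompt = (
                "You are a user of a task-oriented dialogue system.\n"
                f"Your goal: {goal}\n"
                "Conversation so far:\n" + "\n".join(history) + "\nUSER:"
            )
            examples.append({"prompt": prompt, "completion": " " + turn["text"]})
        history.append(line)
    return examples


if __name__ == "__main__":
    demo = {
        "user_goal": {"restaurant": {"food": "italian", "area": "centre", "people": 2}},
        "turns": [
            {"speaker": "user", "text": "I need an Italian restaurant in the centre."},
            {"speaker": "system", "text": "Zizzi Cambridge matches. Shall I book it?"},
            {"speaker": "user", "text": "Yes, a table for 2 please."},
        ],
    }
    for ex in build_examples(demo):
        print(json.dumps(ex, indent=2))
```

Conditioning every target user turn on the full user goal is what ties the simulator's generations back to that goal; the paper's reported reduction in hallucinated slot values is attributed to fine-tuning on such goal-grounded examples, though the exact formatting above is only a sketch.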