Mercorio, F., Potertì, D., Serino, A., Seveso, A. (2024). BEEP - BEst DrivEr's License Performer: A CALAMITA Challenge. In Proceedings of the Tenth Italian Conference on Computational Linguistics (CLiC-it 2024). CEUR-WS.
BEEP - BEst DrivEr's License Performer: A CALAMITA Challenge
Mercorio, F.; Potertì, D.; Serino, A.; Seveso, A.
2024
Abstract
We present BEEP (BEst DrivEr's License Performer), a benchmark challenge for evaluating large language models on a simulated Italian driver's license exam. The challenge tests the models' ability to understand and apply traffic laws, road safety regulations, and vehicle-related knowledge through a series of true/false questions. The dataset is derived from the official ministerial materials used in the Italian licensing process, specifically targeting Category B licenses. We evaluate models such as LLaMA and Mixtral across multiple question categories. In addition, we simulate a driving license test to assess the models' real-world applicability, where passing depends on staying within the allowed number of errors. While scaling up model size improved performance, even larger models struggled to pass the exam consistently. The challenge demonstrates both the capabilities and the limitations of LLMs in handling real-world, high-stakes scenarios, providing insights into their practical use and areas for further improvement.
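A minimal sketch of the kind of pass/fail simulation the abstract describes. The exam parameters below (30 true/false questions per sitting, at most 3 errors allowed, as in the standard Italian Category B theory test) and the `model`/`question_bank` interfaces are assumptions for illustration, not details taken from the paper:

```python
import random

def simulate_exam(model_answers, gold_answers, max_errors=3):
    """Return True if the simulated exam is passed (errors <= max_errors)."""
    errors = sum(1 for pred, gold in zip(model_answers, gold_answers)
                 if pred != gold)
    return errors <= max_errors

def pass_rate(model, question_bank, n_trials=100,
              questions_per_exam=30, max_errors=3):
    """Estimate the pass rate over repeatedly sampled simulated exams.

    `model` is assumed to be a callable mapping a question string to a
    True/False prediction; each entry of `question_bank` is assumed to be
    a dict with "text" and "answer" keys (hypothetical interface).
    """
    passed = 0
    for _ in range(n_trials):
        exam = random.sample(question_bank, questions_per_exam)
        preds = [model(q["text"]) for q in exam]
        golds = [q["answer"] for q in exam]
        passed += simulate_exam(preds, golds, max_errors)
    return passed / n_trials
```

Under this setup, a model's per-question accuracy translates nonlinearly into a pass rate: even ~90% accuracy yields frequent failures once more than `max_errors` mistakes occur in a single sitting, which is consistent with the abstract's observation that larger models still struggle to pass consistently.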