Ognibene, D., Donabauer, G., Theophilou, E., Bursic, S., Lomonaco, F., Wilkens, R., et al. (2023). Moving Beyond Benchmarks and Competitions: Towards Addressing Social Media Challenges in an Educational Context. Datenbank-Spektrum, 23(1), 27–39. https://doi.org/10.1007/s13222-023-00436-3
Moving Beyond Benchmarks and Competitions: Towards Addressing Social Media Challenges in an Educational Context
Dimitri Ognibene; Gregor Donabauer; Sathya Bursic; Francesco Lomonaco; Rodrigo Wilkens; et al.
2023
Abstract
Natural language processing and other areas of artificial intelligence have seen staggering progress in recent years, yet much of this is reported with reference to somewhat limited benchmark datasets. We see the deployment of these techniques in realistic use cases as the next step in this development. In particular, much progress is still needed in educational settings, which can strongly improve users’ safety on social media. We present our efforts to develop multi-modal machine learning algorithms to be integrated into a social media companion aimed at supporting and educating users in dealing with fake news and other social media threats. Inside the companion environment, such algorithms can automatically assess and enable users to contextualize different aspects of their social media experience. They can estimate and display different characteristics of content in supported users’ feeds, such as ‘fakeness’ and ‘sentiment’, and suggest related alternatives to enrich users’ perspectives. In addition, they can evaluate the opinions, attitudes, and neighbourhoods of the users and of those appearing in their feeds. The aim of the latter process is to raise users’ awareness of and resilience to filter bubbles and echo chambers, which are almost unnoticeable and rarely understood phenomena that may affect users’ information intake unconsciously and are unexpectedly widespread. The social media environment is rapidly changing and complex. While our algorithms show state-of-the-art performance, they rely on task-specific datasets, and their reliability may decrease over time and be limited against novel threats. The negative impact of these limits may be exacerbated by users’ over-reliance on algorithmic tools. Therefore, companion algorithms and educational activities are meant to increase users’ awareness of social media threats while exposing the limits of such algorithms. This will also provide an educational example of the limits affecting the machine-learning components of social media platforms. We aim to devise, implement and test the impact of the companion and connected educational activities in helping users acquire and sustain conscientious and autonomous social media usage.
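The abstract describes algorithms that estimate and display per-post characteristics such as ‘sentiment’ in a supported user’s feed. As a purely illustrative sketch, not the authors’ companion implementation, the snippet below shows how such an annotation could be produced with an off-the-shelf pretrained classifier; the `Post` structure, the `annotate_feed` helper, and the use of the default Hugging Face sentiment pipeline are assumptions made only for this example.

```python
# Illustrative sketch only: NOT the companion described in the paper.
# Assumes the `transformers` library and a default pretrained sentiment model.
from dataclasses import dataclass
from transformers import pipeline


@dataclass
class Post:
    """Minimal stand-in for a social media feed item (hypothetical)."""
    post_id: str
    text: str


# Generic pretrained sentiment classifier; any comparable model would do.
sentiment_clf = pipeline("sentiment-analysis")


def annotate_feed(posts):
    """Attach a sentiment label and confidence score to each post in a feed."""
    annotations = []
    for post in posts:
        result = sentiment_clf(post.text)[0]  # e.g. {'label': 'NEGATIVE', 'score': 0.98}
        annotations.append({
            "post_id": post.post_id,
            "sentiment": result["label"],
            "confidence": round(result["score"], 3),
        })
    return annotations


if __name__ == "__main__":
    feed = [
        Post("1", "This miracle cure works instantly, doctors hate it!"),
        Post("2", "The city council published the new budget report today."),
    ]
    for annotation in annotate_feed(feed):
        print(annotation)
```

A production companion would combine several such estimators (e.g. for ‘fakeness’, stance, or source diversity) and, as the abstract stresses, surface their confidence and limitations to the user rather than presenting the scores as ground truth.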
| File | Access | Attachment type | License | Size | Format |
|---|---|---|---|---|---|
| Ognibene-2023-Datenbank Spektrum-AAM.pdf | Open access | Author’s Accepted Manuscript, AAM (Post-print) | Other | 6.73 MB | Adobe PDF |
| 10281-402916_VoR.pdf | Open access | Publisher’s Version (Version of Record, VoR) | Creative Commons | 2.53 MB | Adobe PDF |
Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.