Abed Abud, A., Abi, B., Acciarri, R., Acero, M., Adames, M., Adamov, G., et al. (2023). Highly-parallelized simulation of a pixelated LArTPC on a GPU. JOURNAL OF INSTRUMENTATION, 18(4) [10.1088/1748-0221/18/04/P04034].

Highly-parallelized simulation of a pixelated LArTPC on a GPU

Biassoni M.; Bonesini M.; Branca A.; Brizzolari C.; Brunetti G.; Carniti P.; Falcone A.; Gotti C.; Guffanti D.; Minotti A.; Parozzi E.; Pessina G.; Spanu M.; Terranova F.; Torti M.
2023

Abstract

The rapid development of general-purpose computing on graphics processing units (GPGPU) is allowing the implementation of highly-parallelized Monte Carlo simulation chains for particle physics experiments. This technique is particularly suitable for the simulation of a pixelated charge readout for time projection chambers, given the large number of channels that this technology employs. Here we present the first implementation of a full microphysical simulator of a liquid argon time projection chamber (LArTPC) equipped with light readout and pixelated charge readout, developed for the DUNE Near Detector. The software is implemented with an end-to-end set of GPU-optimized algorithms. The algorithms have been written in Python and translated into CUDA kernels using Numba, a just-in-time compiler for a subset of Python and NumPy instructions. The GPU implementation achieves a speed-up of four orders of magnitude compared with the equivalent CPU version. The simulation of the current induced on 10³ pixels takes around 1 ms on the GPU, compared with approximately 10 s on the CPU. The results of the simulation are compared against data from a pixel-readout LArTPC prototype.
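As a minimal sketch of the technique named in the abstract (plain Python/NumPy code compiled into CUDA kernels with Numba, with the per-pixel work mapped to one GPU thread per pixel), the toy example below shows the general pattern only. The kernel name, the exponential "response", and all parameters are illustrative assumptions, not the published simulator's actual model; running it requires a CUDA-capable GPU (or Numba's CUDA simulator).

    # Illustrative sketch only: a Python function compiled into a CUDA kernel
    # with Numba's @cuda.jit decorator and launched with one thread per pixel.
    import math
    import numpy as np
    from numba import cuda

    @cuda.jit
    def induced_current(pixel_charge, drift_time, current_out):
        # Each GPU thread handles one pixel: it reads that pixel's deposited
        # charge and drift time and writes one current sample.
        i = cuda.grid(1)
        if i < pixel_charge.size:
            # Toy response: exponential attenuation during drift
            # (assumed for illustration, not the paper's charge model).
            current_out[i] = pixel_charge[i] * math.exp(-drift_time[i] / 100.0)

    # Example launch for ~10^3 pixels, mirroring the scale quoted in the abstract.
    n_pixels = 1000
    charge = cuda.to_device(np.random.rand(n_pixels).astype(np.float32))
    t_drift = cuda.to_device((np.random.rand(n_pixels) * 300.0).astype(np.float32))
    current = cuda.device_array(n_pixels, dtype=np.float32)

    threads_per_block = 128
    blocks = (n_pixels + threads_per_block - 1) // threads_per_block
    induced_current[blocks, threads_per_block](charge, t_drift, current)
    result = current.copy_to_host()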
Journal article - Scientific article
Detector modelling and simulations II (electric fields, charge transport, multiplication and induction, pulse formation, electron emission, etc.); Noble liquid detectors (scintillation, ionization, double-phase); Simulation methods and programs; Time projection chambers (TPC)
Language: English
Publication date: 26 April 2023
Year: 2023
Volume: 18
Issue: 4
Article number: P04034
Access: Open access
Files in this record:
10281-437606_VoR.pdf

Open access

Attachment type: Publisher's Version (Version of Record, VoR)
License: Creative Commons
Size: 15.68 MB
Format: Adobe PDF

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/10281/437606
Citations
  • Scopus: 3
  • Web of Science (ISI): 1