Hardware Accelerated ATLAS Workloads on the WLCG Grid
Saved in:
| Journal Title: | Journal of Physics: Conference Series |
| --- | --- |
| Authors: | Forti, A C; Heinrich, L; Guth, M |
| In: | Journal of Physics: Conference Series, 1525, 2020, 1, p. 012059 |
| Format: | E-Article |
| Language: | Undetermined |
| Published: | IOP Publishing, 2020 |
| Subjects: | General Physics and Astronomy |
| ISSN: | 1742-6588, 1742-6596 |
| DOI: | 10.1088/1742-6596/1525/1/012059 |
| URL: | http://dx.doi.org/10.1088/1742-6596/1525/1/012059 |
| Collection: | IOP Publishing (CrossRef) |
| Holding institutions: | DE-D275, DE-Bn3, DE-Brt1, DE-D161, DE-Zwi2, DE-Gla1, DE-Zi4, DE-15, DE-Pl11, DE-Rs1, DE-105, DE-14, DE-Ch1, DE-L229 |
Abstract:

In recent years the use of machine learning techniques within data-intensive sciences in general, and high-energy physics in particular, has increased rapidly, in part due to the availability of large datasets on which such algorithms can be trained, as well as suitable hardware, such as graphics or tensor processing units, which greatly accelerate training and execution. Within the HEP domain, the development of these techniques has so far relied on resources external to the primary computing infrastructure of the WLCG (Worldwide LHC Computing Grid). In this paper we present an integration of hardware-accelerated workloads into the Grid through the declaration of dedicated queues with access to hardware accelerators and the use of Linux container images holding a modern data science software stack. A frequent use case in the development of machine learning algorithms is the optimization of neural networks through the tuning of their hyperparameters (HPs). For this, a large range of network variations must often be trained and compared, which for some optimization schemes can be performed in parallel, a workload well suited to Grid computing. An example of such a hyperparameter scan on Grid resources for the case of flavor tagging within ATLAS is presented.
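The abstract's key observation is that a grid-search hyperparameter scan decomposes into fully independent training jobs, one per point of the search space. The sketch below illustrates that decomposition; it is a minimal illustration, not the authors' actual tooling. `SEARCH_SPACE`, `grid_points`, and `train_and_evaluate` are hypothetical names, and the local process pool merely stands in for independent Grid jobs dispatched to accelerator-equipped queues.

```python
# Minimal sketch of an embarrassingly parallel hyperparameter scan.
# Each search-space point trains independently, so points can run
# concurrently; on the Grid each call would be its own job inside a
# container image on a GPU-equipped queue, not a local worker process.
import itertools
from concurrent.futures import ProcessPoolExecutor

# Hypothetical search space: every combination is one training job.
SEARCH_SPACE = {
    "learning_rate": [1e-4, 1e-3, 1e-2],
    "batch_size": [64, 256],
    "hidden_units": [32, 128],
}


def grid_points(space):
    """Yield one parameter dict per point of the Cartesian-product grid."""
    keys = list(space)
    for values in itertools.product(*space.values()):
        yield dict(zip(keys, values))


def train_and_evaluate(params):
    """Hypothetical per-job payload: train one network variant and
    return its validation metric. A real payload would build the model,
    train it on the dataset shipped with the job, and write the metric
    to the job's output dataset for later comparison."""
    ...
    return 0.0  # placeholder metric


if __name__ == "__main__":
    points = list(grid_points(SEARCH_SPACE))
    # Locally we fan out with processes; on the Grid, each point would
    # instead be submitted as a separate job.
    with ProcessPoolExecutor() as pool:
        scores = list(pool.map(train_and_evaluate, points))
    best = max(zip(scores, points), key=lambda sp: sp[0])
    print("best score %.4f with params %s" % best)
```

In the setting the abstract describes, each grid point would run as its own Grid job on a queue advertising hardware accelerators, inside a Linux container image carrying the data science stack; the record does not specify the submission interface, so none is shown here.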