COMPSs GPU DNN Distributed Training
Version 1

Workflow Type: COMPSs
Maturity: Stable

Name: Dislib Distributed Training - Cache OFF
Contact Person: cristian.tatu@bsc.es
Access Level: public
License Agreement: Apache2
Platform: COMPSs
Machine: Minotauro-MN4

PyTorch distributed training of a CNN on GPUs, launched using 32 GPUs (16 nodes).
Dataset: ImageNet
dislib version: 0.9
PyTorch version: 1.7.1+cu101

Average task execution time: 84 seconds
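
The description above implies a pattern of COMPSs tasks wrapping PyTorch training steps on individual GPUs, with the results aggregated by dislib. The sketch below illustrates that pattern only and is not the published workflow code: the names SimpleCNN, train_shard and average_states, the one-step-per-shard training loop, and the parameter-averaging aggregation are illustrative assumptions; only the PyCOMPSs task/constraint API and standard PyTorch calls are taken as given.

import torch
import torch.nn as nn
from pycompss.api.task import task
from pycompss.api.constraint import constraint
from pycompss.api.api import compss_wait_on

class SimpleCNN(nn.Module):
    # Placeholder CNN standing in for the ImageNet model trained by the workflow.
    def __init__(self, num_classes=1000):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 32, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.classifier = nn.Linear(32, num_classes)

    def forward(self, x):
        return self.classifier(self.features(x).flatten(1))

@constraint(processors=[{"processorType": "GPU", "computingUnits": "1"}])
@task(returns=1)
def train_shard(state_dict, images, labels, lr=0.01):
    # Runs as a COMPSs task constrained to one GPU: load the current weights,
    # take one optimisation step on this data shard, and return the updated weights.
    device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
    model = SimpleCNN().to(device)
    model.load_state_dict(state_dict)
    optimizer = torch.optim.SGD(model.parameters(), lr=lr)
    criterion = nn.CrossEntropyLoss()
    optimizer.zero_grad()
    loss = criterion(model(images.to(device)), labels.to(device))
    loss.backward()
    optimizer.step()
    return {k: v.cpu() for k, v in model.state_dict().items()}

def average_states(states):
    # One common aggregation strategy: average the per-worker parameters.
    return {key: torch.stack([s[key].float() for s in states]).mean(dim=0)
            for key in states[0]}

Each call to train_shard becomes a COMPSs task scheduled on one GPU, so submitting 32 shards would spread the work over the 32 GPUs (16 nodes) reported above. A hypothetical driver loop (global_state and shards are assumed to exist) might look like:

# One GPU task per shard, then synchronise and aggregate.
states = [train_shard(global_state, x_i, y_i) for x_i, y_i in shards]
states = compss_wait_on(states)
global_state = average_states(states)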


Version History

Version 1 (earliest) Created 25th Mar 2024 at 11:20 by Cristian Tatu

No revision comments

Frozen: Version 1 (c1fe38f)
Creators and Submitter
Creator
Additional credit

The Workflows and Distributed Computing Team (https://www.bsc.es/discover-bsc/organisation/scientific-structure/workflows-and-distributed-computing/)

Submitter
Citation
Tatu, C. (2024). COMPSs GPU DNN Distributed Training. WorkflowHub. https://doi.org/10.48546/WORKFLOWHUB.WORKFLOW.801.1
Activity

Views: 1047   Downloads: 323

Created: 25th Mar 2024 at 11:20

Last updated: 25th Mar 2024 at 11:26

Annotated Properties
Topic annotations
Attributions

None

Total size: 201 KB