Presentations
Note that some of the older material contains dated information.
Presentations from Slurm User Group Meeting, September 2019
- Welcome, Danny Auble, SchedMD
- Tutorial: TRES and Banking, Brian Christiansen, SchedMD
- Technical: GPU Scheduling and the cons_tres plugin, Chad Vizino and Morris Jette, SchedMD
- Site Report: LANL, Joseph 'Joshi' Fullop, LANL
- Tutorial: Cgroups and pam_slurm_adopt, Marshall Garey, SchedMD
- Site Report: Enabling and Scaling Diverse Work Loads Efficiently With Slurm, Chansup Byun et al., MIT Lincoln Laboratory
- Tutorial: Priority and Fair Trees, Shawn Hoopes, SchedMD
- Technical: Slurm: Seamless Integration With Unprivileged Containers, Luke Yeager et al., NVIDIA
- Technical: REST API, Nathan Rini, SchedMD
- Technical: Job Container plugin for managing node local namespaces, Aditi Gaur, NERSC
- Technical: VMs and containers for a Slurm-based development cluster, François Diakhaté, CEA
- Technical: High Throughput Computing, Broderick Gardner, SchedMD
- Site Report: Slurm on Sherlock, Kilian Cavalotti, Stanford Research Computing Center
- Technical: Slurm + GCP, Brian Christiansen (SchedMD) and Keith Binder (Google)
- Site Report: ORNL, Matt Ezell, ORNL
- Technical: Monitoring Slurm with a Splunk App, Nicole Dobson, LANL
- Site Report: NERSC, Chris Samuel, NERSC
- Tutorial: Troubleshooting, Albert Gil and Jason Booth, SchedMD
- Technical: Slurm Account Synchronization with UNIX Groups and Users, Ole Nielsen, Technical University of Denmark (DTU)
- Technical: A fully configurable HPC web portal for managing Slurm jobs, Patrice Calegari, Atos
- Technical: Slurm 19.05, Tim Wickberg, SchedMD
- Technical: Slurm 20.02 and Beyond, Tim Wickberg, SchedMD
- Technical: Field Notes From A MadMan, Tim Wickberg, SchedMD
Presentations from Slurm User Group Meeting, September 2018
- Tutorial: Slurm Overview, Felip Moll Marquès, SchedMD
- Technical: Workload Management Requirements for an Interactive Computing e-Infrastructure, Sadaf Alam (CSCS) and the ICEI team (BSC, CEA, CINECA, CSCS, Jülich)
- Technical: Slurm in a Container Only World — Are We Crazy?, Paul Peltz and Lowell Wofford (LANL)
- Technical: Kraken - A stateful approach to cluster management, Paul Peltz and Lowell Wofford (LANL)
- Technical: A Declarative Programming Style Job Submission Filter, Douglas Jacobsen, NERSC
- Technical: Generalized Hypercube (GHC) — A Topology Plugin, M. Clayer and A. Faure, Atos
- Technical: Keeping Accounts Consistent Across Clusters Using LDAP and YAML, Christian Clémençon, Ewan Roche, Ricardo Silva (EPFL)
- Technical: Real-Time Job Monitoring Using An Extended slurmctld Generic Plugin — Introducing the plugin architecture SPACE, Mike Arnhold, Ulf Markwardt, and Danny Rotscher (Dresden)
- Technical: Scheduling by Trackable Resource (cons_tres), Morris Jette and Dominik Bartkiewicz, SchedMD
- Technical: Slurm 18.08 Overview, Brian Christiansen, SchedMD
- Technical: Layout For Checkpoint Restart on Specialized Blades, Bill Brophy, Martin Perry, Doug Parisek, and Steve Mehlberg (Atos)
- Site Report: CEA Site Report, Regine Gaudin, CEA
- Site Report: Colliding High Energy Physics With HPC, Cloud, and Parallel Filesystems, Carolina Lindqvist, Pablo Llopis, and Nils Høimyr (CERN)
- Technical: Slurm Simulator Improvements and Evaluation, Marco D'Amico, Ana Jokanovic, Julita Corbalan (BSC)
- Site Report: CETA-CIEMAT Site Report, Alfonso Pardo, CETA-CIEMAT
- Site Report: Tuning Slurm the CSCS Way, Miguel Gila, CSCS
- Technical: Workload Scheduling and Power Management, Morris Jette and Alejandro Sanchez, SchedMD
- Site Report: LANL Site Report — One Year Post Migration, Joseph 'Joshi' Fullop, LANL
- Technical: Field Notes Mark 2: Random Musings From Under A New Hat, Tim Wickberg, SchedMD
Presentations from Slurm Booth and Birds of a Feather, SC17, November 2017
- Booth: Slurm Overview, Brian Christiansen, Marshall Garey, Isaac Hartung (SchedMD)
- Booth: Heterogeneous Job Support, Morris Jette, Tim Wickberg (SchedMD)
- Booth: From Moab to Slurm: 12 HPC Systems in 2 Months, Paul Peltz, Los Alamos National Laboratory
- Booth: PMIx Multi-Cluster Operations, Ralph H. Castain
- Booth: Federated Cluster Support, Brian Christiansen, SchedMD
- Booth: PMIx Plugin with UCX Support, Artem Polyakov, Mellanox
- BOF: Slurm Birds of a Feather, Tim Wickberg, SchedMD
Presentations from Slurm User Group Meeting, September 2017
- Keynote: Supernova Cosmology & Supercomputing, Alex Kim, Lawrence Berkeley National Laboratory
- Tutorial: Introduction to Slurm, Tim Wickberg, SchedMD
- Technical: SLURMFS — Resource Manager File System for Slurm, Steven Senator, Los Alamos National Laboratory
- Technical: Federated Cluster Support, Brian Christiansen and Danny Auble, SchedMD
- Technical: Utilizing Slurm and Passive Nagios Plugins for Scalable KNL Compute Node Monitoring, Tony Quan and Basil Lalli, NERSC/LBNL
- Technical: Field Notes From the Frontlines of Slurm Support, Tim Wickberg, SchedMD
- Technical: Towards Modular Supercomputing with Slurm, Dorian Krause et al., JSC
- Technical: Heterogeneous Job Support, Morris Jette, SchedMD
- Technical: cli_filter — command line filtration, manipulation, and introspection of job submissions (PDF version), Douglas Jacobsen, NERSC
- Technical: Slurm — Some Slightly Unconventional Use Cases, Chris Hill (MIT), Rajul Kumar (Northeastern), Evan Weinberg and Naved Ansari (BU), Tim Donahue
- Technical: Managing Diversity in Complex Workloads in a Complex Environment, Nicholas Cardo, CSCS
- Technical: SELinux policy for Slurm, Gilles Wiber and Mathieu Blanc (CEA), M'hamed Bouaziz and Liana Bozga (Atos)
- Site Report: From Moab to Slurm: 12 HPC Systems in 2 Months, Peltz, Fullop, Jennings, Senator, Grunau (Los Alamos National Laboratory)
- Site Report: NERSC Site Report, James Botts and Douglas Jacobsen
- Technical: Slurm Roadmap - 17.11, 18.08, and Beyond, Danny Auble, Morris Jette, Tim Wickberg (SchedMD)
- Technical: New Statistics Using TRES, Bill Brophy, Martin Perry, Thomas Cadeau (Atos)
- Technical: Enabling web-based interactive notebooks on geographically distributed HPC resources, Alexandre Beche, EPFL
- Technical: Slurm Singularity Spank Plugin, Martin Perry, Steve Mehlberg, Thomas Cadeau (Atos)
- Site Report: A Slurm Odyssey: Slurm at Harvard FAS Research Computing, Paul Edmon
- Site Report: LLSC Adoption of Slurm for Managing Diverse Resources and Workloads, Chansup Byun et al., MIT Lincoln Laboratory
- Site Report: Cyfronet Site Report — Improving Slurm Usability and Monitoring, M. Pawlik, J. Budzowski, L. Flis, P. Lasoń, M. Magryś
- Technical: When you have a hammer, everything looks like a nail — Checkpoint/restart in Slurm, Manuel Rodríguez-Pascual, J.A. Moríñigo, and Rafael Mayo-García, CIEMAT
Presentations from Slurm Booth and Birds of a Feather, SC16, November 2016
- Booth: Process Management Interface - Exascale (PMIx), Ralph H. Castain
- Booth: Bull Slurm Related Developments, Job Packs demo video, Yiannis Georgiou, Bull AtoS
- Booth: Transition Hangout (a.k.a. how we converted to Slurm), Ryan Cox (BYU), Bruce Pfaff (NASA)
- Booth: Expanding Serial Analysis with Slurm Arrays, Christopher Coffey, Northern Arizona University
- Booth: Intel HPC Orchestrator, Tom Krueger, Intel
- Booth: Slurm Overview, Moe Jette, SchedMD
- BOF: Slurm State of the Union; v16.05, v17.02 and Beyond, Tim Wickberg, SchedMD
Presentations from Slurm User Group Meeting, September 2016
- Keynote: Computer-aided drug design for novel anti-cancer agents, Dr. Zoe Cournia (Biomedical Research Foundation, Academy of Athens)
- Technical: Overview of Slurm Version 16.05, Danny Auble (SchedMD), Yiannis Georgiou (Bull)
- Technical: MCS (Multi-Category Security) Plugin, Aline Roy, CEA
- Technical: Slurm Burst Buffer Integration, David Paul, NERSC
- Technical: Slurm Configuration Impact on Benchmarking, José Moríñigo, Manuel Rodríguez-Pascual, and Rafael Mayo-García, CIEMAT
- Technical: Real-time monitoring Slurm jobs with InfluxDB, Carlos Fenoy García
- Technical: Optimising HPC resource allocation through monitoring, Alexandre Beche, EPFL
- Technical: Simunix, a large scale platform simulator, David Glesser and Adrien Faure, Bull AtoS
- Site Report: Swiss National Supercomputer Centre (CSCS), Nicholas Cardo
- Technical: Configure a Slurm cluster with Ansible, Johan Guldmyr, CSC
- Technical: Checkpoint/restart in Slurm: current status and new developments, Manuel Rodríguez-Pascual, J.A. Moríñigo, and Rafael Mayo-García, CIEMAT
- Technical: Intel Knights Landing (KNL), Morris Jette and Tim Wickberg, SchedMD
- Technical: Job Packs - A New Slurm Feature For Enhanced Support of Heterogeneous Resources, Andry Razafinjatovo, Martin Perry, and Yiannis Georgiou (Bull AtoS), Matthieu Hautreux (CEA)
- Technical: Improving system utilization under strict power budget using the layouts, Dineshkumar Rajagopal, Yiannis Georgiou, and David Glesser, Bull AtoS
- Technical: High definition power and energy monitoring support, Thomas Cadeau and Yiannis Georgiou, Bull AtoS
- Technical: Federated Cluster Scheduling, Dominik Bartkiewicz and Brian Christiansen, SchedMD
- Technical: Slurm Roadmap - SchedMD, Danny Auble, SchedMD
- Technical: Slurm Roadmap - Bull, Yiannis Georgiou and Andry Razafinjatovo, Bull AtoS
- Site Report: Electricité de France (EDF), Cécile Yoshikawa
- Site Report: Leibniz-Rechenzentrum (LRZ), Juan Pancorbo Armada
- Site Report: NERSC Site Report - One Year Of Slurm, Douglas Jacobsen
- Site Report: Experience using Slurm on ARIS HPC System, Nikos Nikoloutsakos, GRNET
Presentations from Slurm Booth and Birds of a Feather, SC15, November 2015
- Booth: PMIx - Enabling Application-driven Execution at Exascale, Ralph H. Castain
- Booth: NASA NCCS Site Update, Bruce Pfaff, NASA
- Booth: Brigham Young University - Site Report, Ryan Cox, BYU
- Booth: Slurm Overview, Brian Christiansen and Danny Auble, SchedMD
- Booth: Never Port Your Code Again - Docker functionality with Shifter using SLURM, Shane Canon, NERSC
- Booth: Slurm Burst Buffer Support, Tim Wickberg, SchedMD
- Booth: Slurm Overview and Elasticsearch Plugin, Alejandro Sanchez, SchedMD
- Booth: All Things TRES, Brian Christiansen, SchedMD
- BOF: Slurm Version 15.08, Danny Auble, SchedMD
- BOF: Improving Backfilling by using Machine Learning to Predict Running Times in SLURM, David Glesser, Bull
Presentations from Slurm User Group Meeting, September 2015
- Keynote: 10-years of Computing and Atmospheric Research at NASA: 1 day per day, Bill Putnam, NASA
- Technical: Overview of Slurm Version 15.08, Morris Jette and Danny Auble (SchedMD), Yiannis Georgiou (Bull)
- Technical: Trackable Resources (TRES), Brian Christiansen and Danny Auble, SchedMD
- Technical: Message Aggregation, Danny Auble (SchedMD), Yiannis Georgiou and Martin Perry (Bull)
- Technical: Slurm Burst Buffer Support, Morris Jette (SchedMD), Tim Wickberg (GW)
- Technical: Partition QOS, Danny Auble, SchedMD
- Technical: Slurm Power Management Support, Morris Jette, SchedMD
- Technical: Slurm Layouts Framework, Matthieu Hautreux, CEA
- Technical: Power Adaptive Scheduling, Yiannis Georgiou and David Glesser (Bull), Matthieu Hautreux (CEA), Denis Trystram (LIG)
- Technical: Never Port Your Code Again - Docker functionality with Shifter using SLURM, Douglas Jacobsen, James Botts, and Shane Canon, NERSC
- Technical: Increasing cluster throughput with Slurm and rCUDA, Federico Silla, Technical University of Valencia, Spain
- Technical: Running Virtual Machines in a Slurm Batch System, Ulf Markwardt, Technische Universität Dresden
- Technical: Supporting SR-IOV and IVSHMEM in MVAPICH2 on Slurm, Xiaoyi Lu, Jie Zhang, et al., The Ohio State University
- Technical: Heterogeneous Resources and MPMD (aka Job Pack), Rod Schultz and Martin Perry (Atos), Matthieu Hautreux (CEA), Yiannis Georgiou (Atos)
- Technical: Towards multi-objective resource selection, Dineshkumar Rajagopal, David Glesser, Yiannis Georgiou, BULL
- Technical: Enhancing Startup Performance of Parallel Applications with SLURM, Sourav Chakraborty, et al., OSU / LLNL
- Technical: Adaptable Profile-Driven TestBed ("Apt"), Brian Haymore, The University of Utah
- Technical: Using and Modifying the BSC Slurm Workload Simulator, Stephen Trofinoff and Massimo Benini, CSCS
- Technical: Improving Job Scheduling by using Machine Learning, David Glesser, Yiannis Georgiou (BULL) and Denis Trystram (LIG)
- Technical: Federated Cluster Scheduling, Brian Christiansen and Danny Auble, SchedMD
- Technical: Native SLURM on the XC30, Doug Jacobsen, James Botts, NERSC
- Technical: Slurm Roadmap - Versions 16.05 and beyond, Morris Jette and Danny Auble (SchedMD), Yiannis Georgiou (Bull)
- Technical: Exascale Process Management Interface, Ralph Castain (Intel), Joshua Ladd (Mellanox), Artem Polyakov (Mellanox), David Bigagli (SchedMD), Gary Brown (Adaptive Computing)
- Site Report: Brigham Young University, Ryan Cox, BYU
- Site Report: University of South Florida, John DeSantis, USF
- Site Report: NASA Center for Climate Simulation, Bruce Pfaff, NASA
- Site Report: Jülich Supercomputing Centre, Dorian Krause, JSC
- Site Report: The George Washington University, Tim Wickberg, GW
Presentations from Slurm Birds of a Feather and the Slurm booth, SC14, November 2014
- Slurm Overview, Danny Auble and Brian Christiansen, SchedMD
- Slurm Version 14.11, Jacob Jenson, SchedMD
- Slurm Version 15.08 Roadmap, Jacob Jenson, SchedMD
- Slurm on Cray systems, David Wallace, Cray
- Fair Tree: Fairshare Algorithm for Slurm, Ryan Cox and Levi Morrison (Brigham Young University)
- VLSCI Site Report, Chris Samuel (VLSCI)
Presentations from Slurm User Group Meeting, September 2014
- Group photo, Paul Hsi (MIT Kavli Institute for Astrophysics and Space Research)
- Welcoming Address, Colin McMurtrie (Swiss National Supercomputing Centre, CSCS)
- Overview of Slurm Versions 14.03 and 14.11, Jacob Jenson (SchedMD) and Yiannis Georgiou (Bull)
- Warewulf Node Health Check, Jacqueline Scoggins and Michael Jennings (Lawrence Berkeley National Lab)
- Slurm Process Isolation, Bill Brophy, Martin Perry and Yiannis Georgiou (Bull), Morris Jette (SchedMD), Matthieu Hautreux (CEA)
- Improving message forwarding logic in Slurm, Rod Schultz, Martin Perry and Yiannis Georgiou (Bull), Matthieu Hautreux (CEA), Danny Auble and Morris Jette (SchedMD)
- Tuning Slurm Scheduling for Optimal Responsiveness and Utilization, Morris Jette (SchedMD)
- Improving HPC applications scheduling with predictions based on automatically-collected historical data, Carlos Fenoy García (Barcelona Supercomputing Centre)
- OStrich: Fair Scheduler for Burst Submissions of Parallel Jobs, Krzysztof Rzadca (University of Warsaw) and Filip Skalski (University of Warsaw / Google)
- Adaptive Resource and Job Management for limited power consumption, Yiannis Georgiou and David Glesser (Bull), Matthieu Hautreux (CEA), Denis Trystram (University Grenoble-Alpes)
- Introducing Energy based fair-share scheduling, Yiannis Georgiou and David Glesser (Bull), Krzysztof Rzadca (University of Warsaw), Denis Trystram (University Grenoble-Alpes)
- High Performance Data movement between Lustre and Enterprise storage systems, Aamir Rashid (Terascala)
- Extending Slurm with Support for Remote GPU Virtualization, Sergio Iserte, Adrián Castelló, Rafael Mayo, Enrique S. Quintana-Ortí, Federico Silla, Jose Duato (Universitat Jaume I and Universitat Politècnica de València)
- SLURM Migration Experience, Jacqueline Scoggins (Lawrence Berkeley National Lab)
- Budget Checking Plugin for SLURM, Huub Stoffers (SURFsara)
- Fair Tree: Fairshare Algorithm for Slurm, Ryan Cox and Levi Morrison (Brigham Young University)
- Integrating Layouts Framework in Slurm, Thomas Cadeau and Yiannis Georgiou (Bull), Matthieu Hautreux (CEA)
- Topology-Aware Resource Selection, Emmanuel Jeannot, Guillaume Mercier, and Adèle Villiermet (Inria)
- Slurm Inter-Cluster Project (presentation), (paper), Stephen Trofinoff (CSCS)
- Slurm Native Workload Management on Cray Systems, Morris Jette (SchedMD)
- Slurm on Cray Systems, Jason Coverston (Cray)
- SLURM Roadmap, Yiannis Georgiou (Bull), Morris Jette and Jacob Jenson (SchedMD)
- Private /tmp for each job using SPANK, Magnus Jonsson (Umeå Universitet)
- ICM Warsaw University Site Report, Dominik Bartkiewicz and Marcin Stolarek (ICM Warsaw University)
- iVEC Site Report, Andrew Elwell (iVEC)
- CEA Site Report, Matthieu Hautreux (CEA)
- Swiss National Supercomputing Centre site report, Massimo Benini (Swiss National Supercomputing Centre, CSCS)
- Aalto University Site Report, Janne Blomqvist, Ivan Degtyarenko and Mikko Hakala (Aalto University)
- The George Washington University site report, Tim Wickberg (George Washington University)
Presentations from Slurm Birds of a Feather, SC13, November 2013
- Slurm Workload Manager Project Report, Morris Jette and Danny Auble, SchedMD
- Bull's Slurm Roadmap, Eric Monchalin, Bull
- Native Slurm on Cray XC30, David Wallace, Cray
Presentations from Slurm User Group Meeting, September 2013
- Group photo
- Welcome: Morris Jette (SchedMD)
- Keynote: Future Outlook for Advanced Computing, Dona Crawford (LLNL)
- Technical: Overview of Slurm version 2.6, Morris Jette and Danny Auble (SchedMD), Yiannis Georgiou (Bull)
- Tutorial: Energy Accounting and External Sensor Plugins, Yiannis Georgiou, Martin Perry, Thomas Cadeau (Bull), Danny Auble (SchedMD)
- Technical: Debugging Large Machines, Matthieu Hautreux (CEA)
- Technical: Creating easy to use HPC portals with NICE EnginFrame and Slurm, Alberto Falzone, Paolo Maggi (Nice Software)
- Tutorial: Usage of new profiling functionalities, Rod Schultz, Yiannis Georgiou (Bull), Danny Auble (SchedMD)
- Technical: Fault Tolerant Workload Management, David Bigagli, Morris Jette (SchedMD)
- Technical: Slurm Layouts Framework, Yiannis Georgiou (Bull), Matthieu Hautreux (CEA)
- Technical: License Management, Bill Brophy (Bull)
- Technical: Multi-Cluster Management, Juan Pancorbo Armada (LRZ)
- Technical: Depth Oblivious Hierarchical Fairshare Priority Factor, François Diakhaté, Matthieu Hautreux (CEA)
- Technical: Refactoring ALPS, Dave Wallace (Cray)
- Site Report: CEA, François Diakhaté, Francis Belot, Matthieu Hautreux (CEA)
- Site Report: George Washington University, Tim Wickberg (George Washington University)
- Site Report: Brigham Young University, Ryan Cox (BYU)
- Site Report: Technische Universität Dresden, Dr. Ulf Markwardt (Technische Universität Dresden)
- Technical: Slurm Roadmap, Morris Jette, Danny Auble (SchedMD), Yiannis Georgiou (Bull)
Presentations from Slurm Birds of a Feather, SC12, November 2012
- Slurm Workload Manager Project Report, Morris Jette and Danny Auble, SchedMD
- Using Slurm for Data Aware Scheduling in the Cloud, Martijn de Vries, BrightComputing
- Slurm Roadmap, Eric Monchalin, Bull
- MapReduce Support in Slurm: Releasing the Elephant, Ralph H. Castain, Wangda Tan, Jimmy Cao and Michael Lv, Greenplum/EMC
- Slurm at Rensselaer, Tim Wickberg, Rensselaer Polytechnic Institute
Presentations from Slurm User Group Meeting, October 2012
- Group photo
- Keynote: The OmpSs Programming Model and its links to resource managers, Jesus Labarta, BSC
- Slurm Status Report, Morris Jette and Danny Auble, SchedMD
- Site Report: BSC/RES, Alejandro Lucero and Carles Fenoy, BSC
- Site Report: CSCS, Stephen Trofinoff, CSCS
- Site Report: CEA, Matthieu Hautreux, CEA
- Site Report: CETA/CIEMAT, Alfonso Pardo Diaz, CIEMAT
- Porting Slurm to BlueGene/Q, Don Lipari, LLNL
- Tutorial: Slurm Database Use, Accounting and Limits, Danny Auble, SchedMD
- Tutorial: The Slurm Scheduler Design, Don Lipari, LLNL
- Tutorial: Cgroup Support on Slurm, Martin Perry and Yiannis Georgiou (Bull), Matthieu Hautreux (CEA)
- Tutorial: Kerberos and Slurm using Auks, Matthieu Hautreux, CEA
- Keynote: Challenges in Evaluating Parallel Job Schedulers, Dror Feitelson, Hebrew University
- Integration of Slurm with IBM's Parallel Environment, Morris Jette and Danny Auble, SchedMD
- Slurm Bank, Jimmy Tang and Paddy Doyle, Trinity College, Dublin
- Using Slurm for Data Aware Scheduling in the Cloud, Martijn de Vries, Bright Computing
- Enhancing Slurm with Energy Consumption Monitoring and Control Features, Yiannis Georgiou, Bull
- MapReduce Support in SLURM: Releasing the Elephant, Ralph H. Castain, et al., Greenplum/EMC
- Using Slurm via Python, Mark Roberts (AWE) and Stephan Gorget (EDF)
- High Throughput Computing with Slurm, Morris Jette and Danny Auble, SchedMD
- Evaluating Scalability and Efficiency of the Resource and Job Management System on large HPC clusters, Yiannis Georgiou (Bull) and Matthieu Hautreux (CEA)
- Integer Programming Based Heterogeneous CPU-GPU Clusters, Seren Soner, Bogazici University
- Job Resource Utilization as a Metric for Clusters Comparison and Optimization, Joseph Emeras, INRIA/LIG
Presentations from the Sixth Linux Collaboration Summit, April 2012
- Resource Management with Linux Control Groups in HPC Clusters Yiannis Georgiou, Bull
Presentations from Slurm Birds of a Feather, SC11, November 2011
- Slurm Version 2.3 and Beyond Morris Jette, SchedMD LLC
- Bull's Slurm Roadmap Eric Monchalin, Bull
- Cloud Bursting with Slurm and Bright Cluster Manager Martijn de Vries, Bright Computing
Presentations from Slurm User Group Meeting, September 2011
- Group photo
- Basic Configuration and Usage, Rod Schultz, Groupe Bull
- SLURM: Advanced Usage, Rod Schultz, Groupe Bull
- CPU Management Allocation and Binding, Martin Perry, Groupe Bull
- Configuring Slurm for HA, David Egolf and Bill Brophy, Groupe Bull
- Slurm Resources isolation through cgroups, Yiannis Georgiou (Groupe Bull), Matthieu Hautreux (CEA)
- Slurm Operation on Cray XT and XE, Moe Jette, SchedMD LLC
- Challenges and Opportunities for Exascale Resource Management and How Today's Petascale Systems are Guiding the Way, William Kramer, NCSA
- CEA Site report, Matthieu Hautreux, CEA
- LLNL Site Report, Don Lipari, LLNL
- Slurm Version 2.3 and Beyond, Moe Jette, SchedMD LLC
- Slurm Simulator, Alejandro Lucero, BSC
- Proposed Design for Enhanced Enterprise-wide Scheduling, Don Lipari, LLNL
- Bright Cluster Manager & Slurm, Robert Stober, Bright Computing
- Job Step Management in User Space, Moe Jette, SchedMD LLC
- Slurm Operation on IBM BlueGene/Q, Danny Auble, SchedMD LLC
Presentations from Slurm Birds of a Feather, SC10, November 2010
- Slurm Version 2.2: Features and Release Plans, Morris Jette, Danny Auble and Donald Lipari, Lawrence Livermore National Laboratory
Presentations from Slurm User Group Meeting, October 2010
- Group photo
- Slurm: Resource Management from the Simple to the Sophisticated, Morris Jette and Danny Auble, Lawrence Livermore National Laboratory
- Slurm at CEA, Matthieu Hautreux, CEA/DAM/DIF
- Slurm Support for Linux Control Groups, Martin Perry, Bull Information Systems
- Slurm at BSC, Carles Fenoy and Alejandro Lucero, Barcelona Supercomputing Center
- Porting Slurm to the Cray XT and XE, Neil Stringfellow and Gerrit Renker, Swiss National Supercomputer Centre
- Real Scale Experimentations of Slurm Resource and Job Management System, Yiannis Georgiou, Bull Information Systems
- Slurm Version 2.2: Features and Release Plans, Morris Jette and Danny Auble, Lawrence Livermore National Laboratory
Presentations from Slurm Birds of a Feather, SC09, November 2009
- Slurm Community Meeting, Morris Jette, Danny Auble and Don Lipari, Lawrence Livermore National Laboratory
Presentations from Slurm Birds of a Feather, SC08, November 2008
- High Scalability Resource Management with SLURM, Morris Jette, Lawrence Livermore National Laboratory
- Slurm Status Report, Morris Jette and Danny Auble, Lawrence Livermore National Laboratory
Other Presentations
- Slurm Version 1.3, Morris Jette and Danny Auble, Lawrence Livermore National Laboratory (May 2008)
- Managing Clusters with Moab and Slurm, Morris Jette and Donald Lipari, Lawrence Livermore National Laboratory (May 2008)
- Resource Management at LLNL, Slurm Version 1.2, Morris Jette, Danny Auble and Chris Morrone, Lawrence Livermore National Laboratory (April 2007)
- Resource Management Using Slurm, Morris Jette, Lawrence Livermore National Laboratory (Tutorial, The 7th International Conference on Linux Clusters, May 2006)
Publications
- Energy Accounting and Control with Slurm Resource and Job Management System, Yiannis Georgiou, et al. (ICDCN 2014, January 2014)
- Evaluating scalability and efficiency of the Resource and Job Management System on large HPC Clusters, Yiannis Georgiou (BULL S.A.S, France); Matthieu Hautreux (CEA-DAM, France) (16th Workshop on Job Scheduling Strategies for Parallel Processing, May 2012)
- GreenSlot: Scheduling Energy Consumption in Green Datacenters, Inigo Goiri, et al. (SuperComputing 2011, November 2011)
- Contributions for Resource and Job Management in High Performance Computing, Yiannis Georgiou, Université Joseph Fourier (Thesis, December 2010)
- Caos NSA and Perceus: All-in-one Cluster Software Stack, Jeffrey B. Layton, Linux Magazine, 5 February 2009.
- Enhancing an Open Source Resource Manager with Multi-Core/Multi-threaded Support, S. M. Balle and D. Palermo, Job Scheduling Strategies for Parallel Processing, 2007.
- Slurm: Simple Linux Utility for Resource Management [PDF], M. Jette and M. Grondona, Proceedings of ClusterWorld Conference and Expo, San Jose, California, June 2003.
- Slurm: Simple Linux Utility for Resource Management, A. Yoo, M. Jette, and M. Grondona, Job Scheduling Strategies for Parallel Processing, volume 2862 of Lecture Notes in Computer Science, pages 44-60, Springer-Verlag, 2003.
Interview
RCE 10: Slurm (podcast): Brock Palen and Jeff Squyres speak with Morris Jette and Danny Auble of LLNL about Slurm.
Other Resources
Learning Chef: Compute Cluster with Slurm, a Slurm cookbook by Adam DeConinck
Last modified 15 October 2019