Original author(s) | Gregory Kurtzer (gmk), et al.
---|---
Developer(s) | Community; Gregory Kurtzer
Stable release | 3.8.7 / 17 March 2022[1]
Repository |
Written in | Go[2]
Operating system | Linux
Type | Operating-system-level virtualization
License | 3-clause BSD License[3]
Website | apptainer.org
Singularity is a free and open-source computer program that performs operating-system-level virtualization, also known as containerization.[4]
One of the main uses of Singularity is to bring containers and reproducibility to scientific computing and the high-performance computing (HPC) world.[5]
Reproducibility requires the ability to use containers to move applications from system to system.[6]
Using Singularity containers, developers can work in reproducible environments of their choosing and design, and these complete environments can easily be copied and executed on other platforms.[7]
In 2021, the Singularity open source project split into two projects: Apptainer and SingularityCE.
History
Singularity began as an open-source project in 2015, when a team of researchers at Lawrence Berkeley National Laboratory, led by Gregory Kurtzer, developed the initial version written in the C programming language and released it[8] under the BSD license.[9]
By the end of 2016, many developers from different research facilities joined forces with the team at Lawrence Berkeley National Laboratory to further the development of Singularity.[10]
Singularity quickly attracted the attention of computing-heavy scientific institutions worldwide:[11]
- Stanford University Research Computing Center deployed Singularity on their XStream[12][13] and Sherlock[14] clusters
- National Institutes of Health installed Singularity on Biowulf,[15] their 95,000+ core/30 PB Linux cluster[16]
- Various sites of the Open Science Grid Consortium including Fermilab started adopting Singularity;[17] by April 2017, Singularity was deployed on 60% of the Open Science Grid network.[18]
For two years in a row, in 2016 and 2017, Singularity was recognized by HPCwire editors as "One of five new technologies to watch".[19][20] In 2017, Singularity also won first place in the category "Best HPC Programming Tool or Technology".[20]
As of 2018, based on data entered voluntarily in a public registry, the Singularity user base was estimated at more than 25,000 installations[21] and included users at academic institutions such as Ohio State University and Michigan State University, as well as top HPC centers such as the Texas Advanced Computing Center, the San Diego Supercomputer Center, and Oak Ridge National Laboratory.
In February 2018, Sylabs,[22] a company founded by the Singularity author, was announced[23] to provide commercial support for Singularity. In October of that year, Sylabs released version 3.0.0,[24] a rewrite in the Go programming language.
Apptainer / Singularity split
In May 2020 Gregory Kurtzer left Sylabs but retained leadership of the Singularity open source project.[25] In May 2021 Sylabs made a fork of the project[26] and called it SingularityCE.[27][28] In November 2021 the Singularity open source project joined the Linux Foundation[29] and was renamed to Apptainer.[30]
Features
Singularity natively supports high-performance interconnects, such as InfiniBand[31] and Intel Omni-Path Architecture (OPA).[32]
As with InfiniBand and Intel OPA devices, Singularity can support any PCIe-attached device within the compute node, such as graphics accelerators (GPUs).[33]
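As a minimal illustration, an NVIDIA GPU can be exposed to a container with the `--nv` flag, which binds the host's driver libraries and device files into the container at runtime; the image name below is a placeholder:

```bash
# Run nvidia-smi from inside the container image; --nv maps the host's
# NVIDIA driver libraries and /dev/nvidia* device files into the container.
singularity exec --nv pytorch.sif nvidia-smi
```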
Singularity also has native support for the Open MPI library through a hybrid MPI container approach, in which Open MPI is installed both inside and outside the container.[31]
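A sketch of the hybrid approach, assuming compatible Open MPI installations on the host and in the image (the image and program names are illustrative):

```bash
# The host's mpirun launches four ranks; each rank starts the MPI
# program that was built and installed inside the container image.
mpirun -np 4 singularity exec openmpi.sif /opt/app/hello_mpi
```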
These features make Singularity increasingly useful in areas such as machine learning, deep learning, and other data-intensive workloads, where applications benefit from the high bandwidth and low latency of these technologies.[34]
Integration
HPC systems typically already have resource-management and job-scheduling systems in place, so a container runtime must be integrated with the existing resource manager.
Using other enterprise container solutions such as Docker on HPC systems would require modifications to the software.[35] Docker containers can instead be converted automatically into stand-alone Singularity image files, which can then be submitted to HPC resource managers.[36]
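For example, with Singularity 3.x an image published on Docker Hub can be pulled and converted to a single Singularity Image File (SIF) in one step; the image names are illustrative:

```bash
# Fetch a Docker image and convert it into a stand-alone SIF file
singularity pull docker://ubuntu:22.04            # writes ubuntu_22.04.sif
# The same conversion with an explicit output file name
singularity build ubuntu.sif docker://ubuntu:22.04
```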
Singularity integrates with many resource managers,[37] including the following (an example batch script is shown after the list):
- HTCondor[38]
- Oracle Grid Engine (SGE)
- SLURM (Simple Linux Utility for Resource Management)
- TORQUE (Terascale Open-source Resource and QUEue Manager)
- PBS Pro (PBS Professional)
- HashiCorp Nomad (A simple and flexible workload orchestrator)
- IBM Platform LSF
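The sketch below shows how a containerized application could be submitted to SLURM with an ordinary batch script, with no changes to the scheduler itself; the job parameters, image, and program names are placeholders:

```bash
#!/bin/bash
#SBATCH --job-name=singularity-demo
#SBATCH --ntasks=4
#SBATCH --time=00:10:00

# srun starts the requested tasks; each task runs the application
# packaged in the SIF image through the container runtime.
srun singularity exec myapp.sif /opt/app/run_analysis
```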
References
- ↑ "Releases · apptainer/singularity". github.com. Retrieved 29 June 2022.
- ↑ "Singularity+GoLang". GitHub. Retrieved 3 December 2021.
- ↑ "Singularity License". Apptainer.org. Singularity Team. Retrieved 3 December 2021.
- ↑ "Singularity presentation at FOSDEM 17". archive.fosdem.org.
- ↑ Kurtzer, Gregory M.; Sochat, Vanessa; Bauer, Michael W. (2017). "Singularity: Scientific Containers for Mobility of Compute". PLOS ONE. 12 (5): e0177459. Bibcode:2017PLoSO..1277459K. doi:10.1371/journal.pone.0177459. PMC 5426675. PMID 28494014.
- ↑ "Singularity, a container for HPC". admin-magazine.com. 24 April 2016.
- ↑ "Singularity Manual: Mobility of Compute". Singularity User Guide - Version 2.5.2.
- ↑ "Sylabs Brings Singularity Containers into Commercial HPC". top500.org.
- ↑ "Singularity License". singularity.lbl.gov. Singularity Team. 19 March 2018. Retrieved 19 March 2018.
- ↑ "Changes to the AUTHORS.md file in Singularity source code made in April 2017". GitHub.
- ↑ "Berkeley Lab's Open-Source Spinoff Serves Science". 7 June 2017.
- ↑ "XStream online user manual, section on Singularity". xstream.stanford.edu.
- ↑ "XStream cluster overview". Archived from the original on 24 October 2020. Retrieved 10 April 2018.
- ↑ "Sherlock: What's New, Containers and Deep Learning Tools". Stanford Research Computing Center.
- ↑ "NIH HPC online user manual, section on Singularity". hpc.nih.gov.
- ↑ "NIH HPC Systems". hpc.nih.gov.
- ↑ "Singularity on the OSG".
- ↑ "Singularity in CMS: Over a million containers served" (PDF).
- ↑ "HPCwire Reveals Winners of the 2016 Readers' and Editors' Choice Awards at SC16 Conference in Salt Lake City". HPCwire.
- 1 2 "HPCwire Reveals Winners of the 2017 Readers' and Editors' Choice Awards at SC17 Conference in Denver". HPCwire.
- ↑ "Voluntary registry of Singularity installations".
- ↑ "Sylabs home page". Retrieved 29 June 2022.
- ↑ "Sylabs Emerges from Stealth to Bring Singularity Container Technology to Enterprise Performance Computing" (Press release). 8 February 2018. Retrieved 29 June 2022.
- ↑ "Singularity 3.0.0". GitHub.
- ↑ "Singularity repository move and company updates". Retrieved 29 June 2022.
- ↑ "Sylabs fork of Singularity". Retrieved 29 June 2022.
- ↑ "SingularityCE". Retrieved 30 June 2022.
- ↑ "SingularityCE". 28 October 2022 – via GitHub.
- ↑ "Singularity has joined the Linux Foundation!". Retrieved 29 June 2022.
- ↑ "Apptainer website". Retrieved 15 February 2023.
- 1 2 "Intel Advanced Tutorial: HPC Containers & Singularity – Advanced Tutorial – Intel" (PDF).
- ↑ "Intel Application Note: Building Containers for Intel Omni-Path Fabrics using Docker and Singularity" (PDF).
- ↑ "Singularity Manual: A GPU example".
- ↑ Tallent, Nathan R.; Gawande, Nitin; Siegel, Charles; Vishnu, Abhinav; Hoisie, Adolfy (2018). "Evaluating On-Node GPU Interconnects for Deep Learning Workloads". High Performance Computing Systems. Performance Modeling, Benchmarking, and Simulation. Lecture Notes in Computer Science. Vol. 10724. pp. 3–21. doi:10.1007/978-3-319-72971-8_1. ISBN 978-3-319-72970-1. S2CID 1674152.
- ↑ Jonathan Sparks, Cray Inc. (2017). "HPC Containers in use" (PDF).
- ↑ "Singularity and Docker". Retrieved 3 December 2021.
- ↑ "Support on existing traditional HPC".
- ↑ "HTCondor Stable Release Manual : Singularity Support". Archived from the original on 4 February 2020. Retrieved 4 February 2020.
Further reading
- Proceedings of the 10th International Conference on Utility and Cloud Computing: Is Singularity-based Container Technology Ready for Running MPI Applications on HPC Clouds?
- Singularity prepares version 3.0, nears 1 million containers served daily
- Dell HPC: Containerizing HPC Applications with Singularity
- Intel HPC Developer Conference 2017: Introduction to High-Performance Computing HPC Containers and Singularity
- HPCwire Reveals Winners of the 2017 Readers’ and Editors’ Choice Awards at SC17 Conference in Denver: Singularity awarded for Best HPC Programming Tool or Technology category