Researchers in engineering; physical, natural, social, policy and decision sciences; and financial modeling need high-performance computing to perform complex calculations and manipulate very large datasets. In April 2011, the UD Research Computing Task Force recommended that UD create a large, broadly available, high-performance computing (HPC) cluster. In summer 2011, Information Technologies (IT) is building such a cluster, designed to be a resource for the entire UD research community. IT provides the infrastructure and consolidates the purchasing to save all researchers money. Individual researchers buy only the computing power they need, without the ongoing financial liability of running their own computing clusters. This cost-effective, collaborative cluster model is used successfully at research universities such as Dartmouth, Indiana, Purdue, UW-Milwaukee, and Virginia.
Our community cluster is a distributed-memory system composed of commodity components: each node has its own memory, and programs scale across nodes by passing messages. (See the current configuration plan; a brief message-passing example follows the list below.) What are the benefits of joining the community cluster?
- Choose your level of participation: Buy as many nodes as your research computing needs dictate. Our standard UD HPC compute nodes cost around $3K each, or you may choose configurations with more memory and local disk. All nodes share a common chipset and vendor; this homogeneity yields very deep discounts and reduces what you need to learn to work on the cluster.
- Cap your expenses: Your up-front expense includes maintenance for the 5-year life of the cluster. IT funds the infrastructure, including expert staff who maintain the cluster and assist researchers with its use.
- Get more than you pay for: You own the nodes you purchase and always have preemptive priority on them. When other community partners’ nodes are idle, you may use them until their owners need them. A batch queueing system governs this opportunistic use of idle nodes and enforces fair-use policies among owner groups (a simplified sketch of this scheduling policy appears after this list). As a result, researchers who own more nodes have more CPU cycles available to them.
- Free your group from system administration: Focus entirely on your research activities rather than system administration and hardware maintenance. IT cluster and network administration specialists are responsible for the 24/7/365 maintenance of the hardware, operating system, basic software needs, file backups, and network security. This cluster infrastructure is part of IT’s financial contribution.
- Free yourself from the cluster’s physical needs: UD’s central data center provides floor space, rack-mounted cluster hosting, high-performance networking and network security, air conditioning, power, fire suppression, and backup systems, thereby sparing you these significant expenses.
- Enhance grant proposals: Funding agencies take note of an institution’s commitment to sustain HPC resources for research; the availability of relevant computational resources immediately after your research grant is awarded; your use of an institutional, energy-efficient solution; and IT’s infrastructure cost-matching contribution. Program officers will recognize and reward your focus on the research, not on physical renovation and computer administration.
- Exercise flexible group membership: Add and remove people from your research group at will, including non-UD collaborators. All members of a group have equal privileges and draw on their group’s resource quotas.
- Work in a consistently managed software environment: The initial programming environment includes commercial and open-source compilers, scientific subroutine libraries, common open-source application software, and some campus-wide licensed software. Some commercial software may be restricted to specific research groups. Environment modules help you manage your programming environments, which reduces the complexity of the system for you and follows the pattern of TeraGrid and other national HPC centers.
- Enjoy improved network access: Ongoing network improvements to the data center, to key on-campus buildings and to Internet2-accessible institutions and data resources enhance the central cluster’s value and access for multi-disciplinary and multi-site research consortia.
- Participate in governance: Consider serving on a faculty advisory committee for the HPC cluster. Meetings will be scheduled semi-annually and will focus on reviewing potential issues and avenues for improvement, highlighting key successes, and discussing future plans for the growth of the cluster.
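Because the cluster is a distributed-memory machine, programs typically scale across nodes by message passing (e.g., MPI) rather than by sharing memory. The minimal sketch below illustrates the idea, assuming an MPI library and the mpi4py Python bindings are available (consult the cluster’s software list before relying on either): each process sums its own slice of the work, and one collective call combines the results.

```python
# Minimal distributed-memory example: each MPI process has its own memory
# and its own slice of the work; results are combined by message passing.
# Assumes an MPI library and the mpi4py bindings are available on the
# cluster -- check the installed software list before relying on them.

from mpi4py import MPI

comm = MPI.COMM_WORLD      # all processes started by mpirun/mpiexec
rank = comm.Get_rank()     # this process's ID: 0, 1, ..., size-1
size = comm.Get_size()     # total number of processes across the nodes

# Each process sums its own strided slice of 0 .. 999,999 ...
local_sum = sum(range(rank, 1_000_000, size))

# ... and one collective call combines the partial sums on rank 0.
total = comm.reduce(local_sum, op=MPI.SUM, root=0)

if rank == 0:
    print(f"{size} processes computed total = {total}")
```

A job like this would typically be launched through the batch system with mpirun or mpiexec across the nodes assigned to it.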
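For the “more than you pay for” model described above, the batch queueing system decides when a job may borrow an idle node and when an owner’s job reclaims one. The toy Python sketch below captures the flavor of that policy; the class names, the “standby” flag, and the exact rules are illustrative assumptions, not the scheduler’s actual interface.

```python
# Toy illustration of the community-cluster scheduling policy described above.
# All names and rules here are hypothetical -- the real batch system's
# configuration, not this sketch, defines the actual behavior.

from dataclasses import dataclass
from typing import Optional


@dataclass
class Job:
    group: str      # research group that submitted the job
    standby: bool   # True if the job is willing to run opportunistically


@dataclass
class Node:
    owner_group: str                # group that purchased this node
    running: Optional[Job] = None   # job currently occupying the node


def try_schedule(job: Job, nodes: list[Node]) -> Optional[Node]:
    """Place a job on a node, honoring owner priority over standby guests."""
    # 1. Owners get first claim on their own idle nodes.
    for node in nodes:
        if node.owner_group == job.group and node.running is None:
            node.running = job
            return node
    # 2. An owner's (non-standby) job may preempt a guest's standby job
    #    running on one of the owner's nodes.
    if not job.standby:
        for node in nodes:
            if (node.owner_group == job.group and node.running is not None
                    and node.running.standby):
                node.running = job   # the displaced guest job would be requeued
                return node
    # 3. Standby jobs may borrow any idle node until its owner needs it.
    if job.standby:
        for node in nodes:
            if node.running is None:
                node.running = job
                return node
    return None   # no suitable node; the job waits in the queue


if __name__ == "__main__":
    cluster = [Node(owner_group="jones_lab")]
    guest = Job(group="smith_lab", standby=True)    # opportunistic job
    owner = Job(group="jones_lab", standby=False)   # owner's own job
    print(try_schedule(guest, cluster))   # guest borrows the idle node
    print(try_schedule(owner, cluster))   # owner's job preempts the guest
```

In a real batch system the preempted standby job would be requeued or checkpointed rather than dropped; the sketch omits that bookkeeping for brevity.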