Mills HPC cluster retiring in 2018
The purchase and installation of the next HPC community cluster in the University's data center prompts the next stage in the Mills end-of-life plan. The Mills cluster, purchased in 2012, exited its official support period at the beginning of 2017. It has remained in production since then, with 13 nodes (about 6%) removed from service after irreparable failures.
Later this year, in preparation for installing the next community cluster, several hardware racks will be removed from the Mills cluster, and Mills will continue to run at diminished capacity for a period thereafter. Sometime in 2018, the remainder of Mills will be consolidated into a single rack, and the login node will be rebuilt to serve solely as a way for users to access that cluster's Lustre and home-directory storage for a limited time.
The September 2017 HPC Symposium session (Sept. 27) included two presentations: one by IT Research Computing staff, and one given jointly by Jeff Frey of IT Network and System Services (IT-NSS) and Zubaer Hossain, assistant professor of Mechanical Engineering, together with his research group.
First, IT Research Computing staff announced the plans to use Penguin Computing’s Tundra Extreme Scale (ES) design for the next HPC community cluster. They also reminded attendees about upcoming HPC workshops and the advanced training available by request.
Second, Frey opened the presentation on Hossain’s research by discussing the software build and workflow process he developed for Hossain’s research group. Hossain’s presentation, “Quantum-Continuum Design of Ultratough Nanocomposites,” addressed the role of computer modeling in materials design, specifically the nanoscale mechanisms and mechanical properties that govern the design of lightweight nanocomposites, thin-film photovoltaics, and van der Waals heterostructures. The group also described how they use Rclone to sync files and directories to and from various cloud services. (Rclone is now available on Farber.)
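For readers unfamiliar with the tool, a typical Rclone workflow looks roughly like the sketch below. This is a hedged example, not the group's actual setup: the remote name `mydrive` and the paths are hypothetical placeholders, and a remote must first be created interactively with `rclone config`.

```shell
# Hypothetical Rclone workflow sketch. "mydrive" is an assumed remote name
# that would be created beforehand with:  rclone config

# Copy new or changed files from the cluster to the cloud remote:
#   rclone copy ~/results mydrive:farber-results --progress
#
# Make a local directory match the cloud copy exactly (sync deletes
# files at the destination that no longer exist at the source):
#   rclone sync mydrive:farber-results ~/results-restore
#
# List what is stored under the remote path:
#   rclone ls mydrive:farber-results

# Harmless check so this sketch runs even where rclone is not installed:
STATUS_MSG=$(if command -v rclone >/dev/null 2>&1; then
    rclone version | head -n 1
else
    echo "rclone not installed"
fi)
echo "$STATUS_MSG"
```

Note the distinction between `copy` (additive) and `sync` (mirroring, which can delete destination files); a dry run with `--dry-run` is a common precaution before the first `sync`.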
This presentation was one of a series of meetings designed for researchers using or interested in using the University’s HPC clusters. Those interested in presenting at a future HPC Symposium should email firstname.lastname@example.org.
- Community coffee hours: This fall, IT-CS&S will continue hosting coffee hours for the UD Geographic Information Systems (GIS) community. The next GIS coffee hour will be Tuesday, Oct. 10 at 10:00 a.m. in Faculty Commons, 116 Pearson Hall. The technical presentation will be given by Geri Miller (from Esri) on “New ArcGIS apps: ArcGIS Pro and Insights for ArcGIS.” Following the presentation, there will be an open discussion about current issues in GIS. For more information, email Olena Smith, IT-CS&S.
- GIS Day: IT-CS&S and the University Library will host GIS Day on Thursday, Nov. 16, in the Perkins Student Center Gallery. More details will be posted on the UD GIS website.
Call for research: posters and links
The IT Research Computing team encourages Mills and Farber stakeholders to submit their research to the Research Computing website gallery. Contributions to this gallery allow the Research Computing group to highlight the importance of funding future University HPC clusters: they demonstrate how much research has been accomplished using UD’s HPC community clusters and the clusters’ vital importance to the UD research community. You can submit links to your papers, images, or poster sessions using the online Research Submission Form, or submit other file types to Anita Schwartz (email@example.com).
Training offered for University researchers
The Research Computing team hosted the UNIX Basics series over the summer and the XSEDE MPI Workshop in early October.
- The UNIX Basics series covered several topics, including Unix/Linux for beginners, the vi text editor, and getting started with the Farber community cluster. Over 50 people attended the four sessions and received hands-on assistance. Materials from these sessions are available on the HPC wiki.
- The two-day XSEDE MPI workshop, held October 3-4, 2017, gave C and Fortran programmers a hands-on introduction to MPI programming. Attendees left with a working knowledge of how to write scalable codes using MPI, the standard programming tool for scalable parallel computing.
- Information on November and December XSEDE workshops is available at the Research Computing website.