Research Computing Task Force Report

Peter Monk, Task Force Chair, Sept. 2010

The Task Force met three times, on March 18, April 12, and May 13. We initially discussed items of general concern to the members before concentrating on cluster computing. The following seven items emerged as major areas of concern or potential areas for action:


Staffing

The task force feels that this is the most pressing item. The University is seriously understaffed in research computing support. The staff currently providing dedicated support are severely overloaded, yet are constantly asked to take on more responsibilities. This staffing crisis compromises, and may even jeopardize, the University's research potential. Further study is needed to identify our specific staffing needs and how to meet them.

Physical Infrastructure

Chapel St. Computing Center is currently near capacity (HVAC, power, and to some extent, space). The accelerating need to grow the University's computing capacity makes a sustainable plan for more physical infrastructure the University's second most pressing need.

One possibility to explore is creative use of the former Chrysler site. We did not have time for substantive discussions on proposals for this site, but a parallel supercomputing facility would be a good use of the location's acreage and its access to power and high-bandwidth network connectivity.

Training and Support

There was a strong sentiment that better training and support are needed on campus. In particular, all clusters run some version of UNIX, an operating system often unfamiliar to incoming graduate students. Task Force members mentioned specific training needs in basic UNIX/Linux skills, shell scripting, Perl, Python, parallel programming techniques, specialized statistical topics, and database programming. A broader set of UD faculty should be surveyed to determine training needs, priorities, and additional staffing requirements.

Some initial investigation focused on online training and UD classes. To help with Linux, Dean Nairn of IT Client Support & Services has proposed a sequence of workshops that IT-CS&S could offer during Winter term (outline attached). IT-CS&S has also begun compiling a list of free and commercial online training resources (attached).

Some academic courses in Computer & Information Sciences are also relevant, but would require additional teaching staff to accommodate the increased demand. More work needs to be done to define this need and recommend further action.


Cluster Computing

Managing and purchasing clusters are key concerns for many faculty on campus, as we have already stated. Early on, we resolved to inventory the clusters on campus. Dick Sacher of IT Client Support & Services, with the aid of departmental Information Technology professionals, undertook this survey (see attached ClusterInventory.xlsx). It reveals the following:

  • There are over 50 clusters on campus.
  • Most are quite small (the median is 32 cores with 48 GB of memory).
  • Most are running Linux (CentOS dominates).
  • From 2005 to 2009, we purchased on average almost 8 clusters a year (a maximum of 11 in 2009).

While there will always be a place for specialist clusters, this suggests that many users might see improved functionality (i.e., more cores and memory) and fewer support headaches by combining purchases into a smaller number of larger units.

To this end, with the extensive help of Dick Sacher, we invited John Campbell, AVP of the Rosen Center for Advanced Computing at Purdue University, to conduct a seminar via videoconference on Purdue's Community Cluster purchase program. Under this program, researchers contribute funding for hardware in a centrally managed cluster in return for clearly defined access to their own hardware, and to the machine as a whole (which is, of course, much larger than any one group could afford). He is willing to give another seminar in the fall and to introduce Purdue faculty who can discuss their experiences with the program. The Community Cluster program deserves further examination.

The cluster survey reveals another interesting point: some clusters are quite old (the oldest dating to 1999). While some older clusters are probably used for teaching, an orderly shutdown of aging clusters would support the University's goal of a green campus. This could easily be implemented as part of an agreed Community Cluster program.


Software Licensing

To a greater or lesser extent, we are all users of commercial software products, from statistical packages to engineering and scientific simulation packages and general development environments. Efforts should be made to coordinate software license purchases so that users can leverage bulk purchasing for cost efficiency, and existing licenses need to be better advertised. Funding resources also need to be identified more effectively.

An initial summary of software to target comes from the cluster survey: compilers (e.g., Intel, Portland Group), Gaussian, Molpro, WIEN2k, CHARMM, GAMESS, EMBOSS, NCBI Blast, molden, ADF, Dacapo, Fluent, Gambit, NMRPipe, Rosetta, Simpson, and MATLAB. Other potential software might include Mathematica and various statistical software packages, such as SAS, SPSS, STATA, HLM, LISREL, BILOG-MG, PARSCALE, Mplus, WINSTEPS and the qualitative/ethnographic research software NVIVO.

Experimental Servers/Cloud Computing

With the increased use of cloud computing and virtualization, some faculty felt it desirable to have a campus computing service offering on-demand virtual Linux and Windows machines. This is particularly important for the College of Business & Economics, where cloud computing is a common solution for many service applications.


TeraGrid

TeraGrid offers NSF awardees vast resources for parallel computing, but projects often need to undergo pilot testing on local systems and show promise before being allowed to use TeraGrid resources. A Community Cluster approach could make such a machine available to a wider range of faculty who could not otherwise afford a 500+ processor system.

Training in the use of TeraGrid resources is needed and could combine internal efforts with information (perhaps web-based) from TeraGrid staff. We need a "Campus Champion" (TeraGrid's terminology) to promote TeraGrid and coordinate training.

Spring 2010 Research Computing Task Force





Mark Barteau



Dominic Di Toro

Civil & Environmental Engineering


Doug Doren

Arts & Sciences


Jeff Frey

IT-NSS & Col of Engrg


Jeffrey Heinz

Linguistics & Cognitive Science


Xiaoming Li

Electrical and Computer Engrg


Peter Monk

Mathematical Sciences


Ratna Nandakumar

School of Education


Sandeep Patel

Chemistry & Biochemistry


Dick Sacher

IT Client Support & Services


Stephen Siegel

Computer & Info Sciences


Michael Shay

Physics & Astronomy


Michela Taufer

Computer & Info Sciences


Dion Vlachos

Chemical Engineering


Harry Wang

B&E Accounting & MIS


Cathy Wu

Computer & Info Sciences


Xiao-Hai Yan

Col Earth Ocean & Environment


Karl Steiner



Martin Swany

Computer & Info Sciences