UD Information Technologies (IT) has formally announced that Sakai will be retired on Dec. 22, 2018. (Read this UDaily article.) UD currently supports both Sakai and Canvas, but has launched the Transition to Canvas (T2C) initiative to move the University to one system.
By the end of the fall 2018 semester, the University plans to move all Sakai courses and projects to Canvas or other platforms. (Learn about the reasons the University is moving to Canvas.)
IT Academic Technology Services (IT-ATS) has been preparing faculty and staff for T2C since August’s Faculty Commons Keep Calm and Teach On (KCTO) workshop series.
IT-ATS has developed a comprehensive T2C website to help faculty and staff during the transition. The website includes instructions for moving content from Sakai to Canvas and self-service resources for learning how to use Canvas.
The T2C website also includes information about hands-on training sessions for faculty and staff. In addition, faculty can request one-on-one consultations or departments can request workshops for their faculty using the T2C consultation request form.
Faculty and staff currently using Sakai should have received an email with a link to a questionnaire about migrating their courses and projects to Canvas or other platforms. IT-ATS and other Faculty Commons partners will use the responses to design training and to gauge faculty interest in an automated course migration tool.
For more information, visit the T2C website.
Over the last year, IT staff have been working with faculty, research staff, and vendors to design the next high-performance computing (HPC) community cluster. Penguin Computing's Tundra Extreme Scale (ES) design met all of the University’s design goals and was priced to maximize the value of our investment in this critical research resource. (See the Design and Policies document for more details on the University’s HPC goals.)
This successor to the Mills and Farber HPC clusters will pack more computing power into less physical space, use power more efficiently, and leverage reusable infrastructure for a longer overall lifespan. Several leading HPC vendors provided supporting information in late 2016; that input, together with feedback from University faculty and staff, led to a finalized design proposal that became a formal request for proposal (RFP).
The project reached a major milestone with the announcement at the September HPC Symposium of Penguin Computing as the vendor for the next community cluster.
Penguin’s Tundra ES design follows the specifications of the Open Compute Project, an initiative to standardize the construction of compute hardware and the racks that hold that hardware.
The next community cluster will use Intel processors, as the current Farber cluster does. The first generation of nodes will feature two 18-core Intel Xeon E5-2695 v4 processors, nearly double the number of cores in Farber's compute nodes. Although all of the component hardware in Penguin's Tundra ES design is significantly more advanced than Farber's, the entry-level buy-in for stakeholder investment in the next community cluster is anticipated to be similar. So are the value-added benefits: opportunistic access to idle compute resources, use of the large-scale shared scratch storage (Lustre), an allocation of resilient shared storage (NFS), and support provided by UD IT HPC staff.