ACES Workshop

Dates and Location

Workshop Dates: July 18 – July 20, 2024 (Thursday – Saturday)
Location: Omni Providence Hotel, Providence, Rhode Island
Application Deadline: June 11, 2024

Travel Support is Available!

Overview

Please join Texas A&M High Performance Research Computing (HPRC) in Providence for a pre-conference workshop ahead of the Practice & Experience in Advanced Research Computing (PEARC24) conference. You’ll learn how the ACES (Accelerating Computing for Emerging Sciences) testbed complements the National Science Foundation’s portfolio of advanced cyberinfrastructure (CI) resources and services supported through U.S. taxpayer investment. Participants will meet kindred community members and future collaborators from across the country.

ACES features a menu of accelerators in a composable hardware infrastructure designed to excel at artificial intelligence/machine learning (AI/ML) tasks, fronted by an Open OnDemand interface and a robust, growing software ecosystem. The HPRC advanced user support team will conduct tutorials and present PEARC24 papers that chronicle how ACES tackles data-intensive AI/ML tasks with greater speed, precision, and efficiency. Participants from all domains are welcome to deliver lightning talks about their own ACES experience and share how it has influenced their plans for the future.

Professional staff and research faculty who work at minority-serving institutions of higher learning, who belong to demographics underrepresented in STEM (Science, Technology, Engineering, and Mathematics) academics and careers, or whose research domains are new to advanced CI are especially encouraged to apply. All applicants must provide a brief description of their current research, their plans for future ACES engagement, and a biosketch (current NSF format).

Registration waivers and travel support are available for a limited number of U.S.-based applicants. Non-academic industry or government affiliates will be charged a $350 registration fee (to cover meals). If you have questions, or if you’d like to present your research, please contact: events@hprc.tamu.edu.

Agenda

July 18, 2024 (Thursday evening): Reception
July 19, 2024 (Friday): Workshops and Tutorials
July 20, 2024 (Saturday morning): Workshops and Tutorials

    Tutorial Sessions (Preliminary)

    • NVIDIA H100
    • Intel oneAPI for Sapphire Rapids CPUs and Intel Max (formerly Ponte Vecchio) GPUs
    • Graphcore Intelligence Processing Units (IPUs)
    • Containers

Here’s what early adopters have to say about ACES!

Ruisi Cai (UT-Austin) uses ACES to process long context sequences in Large Language Models (LLMs). “Due to transformers’ quadratic memory requirements, LLMs command substantial computational power and agile memory management,” said Cai. The UT-Austin team developed a unique approach highlighted in a paper titled “Learning to Compress Long Contexts by Dropping-In Convolutions,” which was accepted at the International Conference on Machine Learning (ICML 2024).

Aocheng Li (Purdue) uses ACES for data-driven archaeological site reconstruction. They said, “I love its elegant and lightweight web interface for file manipulation and job creation/submission. Using the composability features, I combine virtual network computing and TensorBoard servers to launch jobs and monitor training output with just a few clicks, all within one browser session. The HPRC staff are extremely helpful and quick to solve my issues and concerns. Using ACES has been an enjoyable experience.”

Freddie Witherden (Texas A&M Department of Ocean Engineering) used ACES to perform high-order accurate fluid flow calculations of bluff bodies. “The range of hardware, including CPUs, NVIDIA GPUs, and Intel GPUs, is perfect for the development, testing, and evaluation of performance-portable coding paradigms. Additionally, the large-memory nodes have proved invaluable for enabling us to perform preprocessing work for simulations on leadership-class computing resources.”

Rubem Mondaini (University of Houston) uses ACES to study quantum many-body problems in Condensed Matter Physics with the goal of understanding how Coulomb repulsion between electrons can affect quantum matter topology. “ACES’ abundant supply of the latest CPUs (Sapphire Rapids), large memory, and fast interconnect make it possible to reach physical system sizes unforeseen until now,” said Dr. Mondaini. “This unique combination of assets makes all the difference with investigations in the quantum world,” he added.

Chen-Chun Chen (Ohio State University NOWLAB) primarily uses the Intel GPUs and XeLink nodes on ACES. “Using TensorFlow and Horovod, I’ve been running OSU Micro Benchmarks (OMB) to extend the MVAPICH library to support Intel PVC GPUs,” he said, and added, “I receive invaluable assistance from the HPRC helpdesk, and my experiments on ACES have been consistently smooth.”

Junyuan Hong (UT-Austin) cited ACES in his latest research, which presents a new method for private prompt tuning of LLMs like ChatGPT. The solution, called Differentially-Private Offsite Prompt Tuning (DP-OPT), employs a discrete client-side prompt that can be applied to desired cloud models without significantly compromising performance.

Wonmuk Hwang (Texas A&M Department of Biomedical Engineering) performs molecular dynamics simulations of biomolecules, a task best performed with state-of-the-art computational resources. Dr. Hwang uses ACES to investigate the mechanical response of T-cell receptors, which defend against pathogens like influenza and SARS-CoV-2, the virus responsible for the COVID-19 pandemic. “The NVIDIA H100s are great for carrying out multiple simulations, and the HPRC staff are always helpful when troubleshooting aspects of this novel testbed,” he said.

Hanning Chen (Texas Advanced Computing Center) used ACES to conduct a Molecular Dynamics (MD) simulation of the Satellite Tobacco Mosaic Virus with more than 28 million atoms. “MD simulations of large biological systems are significant because they reveal functions contributed by millions of atoms, or more,” he said, and added, “Our benchmark test with NAMD3 and a 64-node run revealed a performance of 4.8 ns/day, with an impressive 80 percent scaling factor when we increased the number of nodes from 1 to 64. ACES is a powerful tool for MD simulations, and the HPRC support team’s knowledge of this novel platform helps researchers progress more quickly.”
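For context on the scaling figure quoted above: a "scaling factor" of this kind is usually the strong-scaling parallel efficiency (an assumption here, since the quote does not define the term). A minimal sketch of that definition, with R_N denoting simulation throughput in ns/day on N nodes:

    E(N) = \frac{R_N}{N \cdot R_1}, \qquad E(64) \approx 0.8 \ \text{with} \ R_{64} = 4.8\ \text{ns/day}

In other words, the 64-node run retains roughly 80 percent of the throughput that perfect linear scaling from a single node would predict.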

Apply

Acknowledgment

The ACES team gratefully acknowledges support from the National Science Foundation (NSF). The ACES project is supported by Office of Advanced Cyberinfrastructure (OAC) award number 2112356. To learn more about ACES, please visit us at https://hprc.tamu.edu/aces/.

Contact Information

Phone: 979-845-0219
Email: events@hprc.tamu.edu