Leading-edge AI Computing System Now at Home with Brookhaven Lab’s Computational Science Initiative

The Computational Science Initiative (CSI) at the U.S. Department of Energy’s Brookhaven National Laboratory now hosts one of the newest computing systems aimed at enhancing the speed and scale of conducting diverse scientific research: an NVIDIA® DGX-2™ Artificial Intelligence supercomputer.

NVIDIA’s latest deep learning high-performance computing system, the DGX-2, is now part of Brookhaven’s Computational Science Initiative. Photo courtesy of NVIDIA.

Designed to “take on the world’s most complex artificial intelligence challenges,” the NVIDIA DGX-2 at Brookhaven is one of the first available worldwide. At the Lab, the NVIDIA DGX-2, nicknamed “Minerva,” will serve as a user-accessible multipurpose machine focused on computer science research, machine learning, and data-intensive workloads.

According to Adolfy Hoisie, who leads Brookhaven’s Computing for National Security Department, the NVIDIA DGX-2’s compute power, which includes a 2-petaflops graphics processing unit (GPU) accelerator made possible by a scalable architecture built on the NVIDIA NVSwitch™ AI network fabric, will create opportunities for diverse research pursuits with impact across the laboratory.

In the area of systems architecture research, Hoisie expects that the NVIDIA DGX-2 will yield insights in evaluating the performance, power, and reliability of state-of-the-art computing technologies for various workloads.

Because the NVIDIA DGX-2 was specifically designed to tackle the largest data sets and the most computationally intensive and complex models, it also will play an important role in the Lab’s machine learning efforts. One such customer will be the ExaLearn collaboration, an Exascale Computing Project co-design center featuring eight DOE national laboratories and led by CSI’s Deputy Director, Francis J. Alexander. The ExaLearn team is primarily developing machine learning software for exascale applications.

The NVIDIA DGX-2 also will be engaged as part of CSI’s ongoing management, development, and discovery associated with the analysis and interpretation of high-volume, high-velocity heterogeneous scientific data.

“We will expose the NVIDIA DGX-2 to data-intensive workloads for many programs, such as those of importance to DOE science programs at the Lab’s Office of Science User Facilities—including the Relativistic Heavy Ion Collider, National Synchrotron Light Source II, and Center for Functional Nanomaterials—and to Department of Defense (DoD) data-intensive workloads of interest,” Hoisie explained. “Given significant bandwidth in and out of the system, we can pursue data analyses in multiple paradigms, for example, streaming data or fast access to vast amounts of data from Brookhaven Lab’s large scientific databases. Such improvements will enable extensive strides in data analyses within the Lab’s core high energy physics, nuclear physics, biological, atmospheric, and energy systems science areas and cryogenic technologies, as well as for specific research areas in computing sciences of interest to DOE and DoD.”

CSI’s DGX-2 also will be a resource for NVIDIA as part of the collaboration. As research involving the system advances, its capability in impacting applications, speed to solutions, or even measures of its own overall performance will be shared between Brookhaven Lab and NVIDIA developers.

The DGX-2 is the newest addition to NVIDIA’s portfolio of AI supercomputers, which began with the DGX-1, introduced in 2016. The DGX-2 brings new innovations to AI, including the integration of 16 fully interconnected NVIDIA Tesla® V100 Tensor Core graphics processing units with 512 gigabytes of GPU memory.

“We built the NVIDIA DGX-2 to solve the world’s most complex AI challenges, so we’re delighted that Brookhaven National Laboratory will put its innovations to use to further real-world science,” said Charlie Boyle, senior director of DGX Systems at NVIDIA. “The Lab’s researchers will be able to tap into the system’s 16 NVIDIA Tesla V100 Tensor Core GPUs—delivering two petaflops of computational performance—to help address opportunities of national importance.”

Source: BNL
