
case study   SUPERCOMPUTING
Bringing super power to the desktop
The Pittsburgh Supercomputing Center’s Bridges project gives seamless desktop access to high-performance computing
BY STEPHANIE KANOWITZ
A partnership between the Pittsburgh Supercomputing Center and the technology industry is making the processing power of high-performance computing available to both traditional and nontraditional HPC users.
The center teamed with Hewlett Packard Enterprise and Intel for the Bridges project, a National Science Foundation-funded program that gives approved users seamless desktop access to HPC resources via a portal.
“The name ‘Bridges’ stems from three computational needs the system will fill for the research community,” said Nick Nystrom, the center’s director of strategic applications and principal investigator on the project, when it was first announced. “Foremost, Bridges will bring supercomputing to nontraditional users and research communities. Second, its data-intensive architecture will allow high-performance computing to be applied effectively to big data. Third, it will bridge supercomputing to university campuses to ease access and provide burst capability.”
Bridges users will upload their data and submit jobs to the HPC resources they’ve selected, Nystrom said. They don’t have to log in or understand File Transfer Protocol, for example. Portal managers will handle granting access to users and allocating resources.
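The article doesn’t describe the portal’s interface, but the workflow Nystrom outlines reduces to two steps: upload data over HTTPS, then submit a job against it. Here is a minimal Python sketch of that pattern, assuming a hypothetical REST portal; the URL, token, field names and file name below are illustrative, not Bridges’ actual API:

    import requests

    PORTAL = "https://portal.example.org/api"           # hypothetical endpoint
    TOKEN = {"Authorization": "Bearer <access-token>"}  # issued by portal managers

    # Step 1: upload the input data -- plain HTTPS, no FTP knowledge required.
    with open("sample_input.dat", "rb") as f:
        upload = requests.post(f"{PORTAL}/datasets", headers=TOKEN,
                               files={"file": f})
    dataset_id = upload.json()["id"]

    # Step 2: submit a job targeting a resource class the user has selected,
    # e.g., the large-shared-memory nodes described below.
    job = requests.post(f"{PORTAL}/jobs", headers=TOKEN, json={
        "dataset": dataset_id,
        "resource": "large-memory",   # illustrative resource label
        "command": "run_analysis.sh",
    })
    print(job.json()["status"])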
Bill Mannel, vice president and general manager of HPC and big data at Hewlett Packard Enterprise, said Bridges consists of three types of the company’s machines (a quick tally of these figures follows the list):
• Four Integrity Superdome X servers, which let users load data once into their 12 terabytes of shared memory and then conduct analyses. The process concentrates memory in one place rather than spreading it across many nodes.
• 42 ProLiant DL580 servers, each of which has 3 terabytes of shared memory and provides virtualization and remote visualization.
• 800 Apollo 2000 nodes, each with 128 gigabytes of shared memory to support capacity workloads.
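Taken together, the memory figures quoted above can be totted up directly. This back-of-the-envelope Python tally uses only the numbers in the list:

    # Shared memory per the figures quoted in this article:
    # 4 x 12 TB, 42 x 3 TB, 800 x 128 GB.
    node_types = [
        {"model": "Integrity Superdome X", "count": 4,   "mem_tb": 12.0},
        {"model": "ProLiant DL580",        "count": 42,  "mem_tb": 3.0},
        {"model": "Apollo 2000",           "count": 800, "mem_tb": 0.128},  # 128 GB
    ]

    total_tb = sum(n["count"] * n["mem_tb"] for n in node_types)
    print(f"Total shared memory across Bridges: {total_tb:.1f} TB")  # ~276.4 TB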
“All tied together, you’ve got the number crunchers, which are the Apollo 2000s, the data analytics engines in the Superdome, and then you have the 580s that provide the direct access to the whole system from all the users’ workstations or laptops, basically giving them access to the supercomputing resources of the center itself,” Mannel said.
The system’s composition reflects the workloads Nystrom said he expects Bridges to handle — particularly those involving big data.
“What that lets us do is to converge
“The point is to have each compute node have multiple paths to storage to avoid congestion and also to give people the maximum performance at the minimum cost.”
— NICK NYSTROM, PITTSBURGH SUPERCOMPUTING CENTER