TeamHPC Implements Dual-Core AMD Cluster with InfiniPath
TeamHPC, a division of M&A Technology, Inc., is implementing the world's first AMD dual-core Opteron cluster integrated with the breakthrough technology of the PathScale InfiniPath HTX InfiniBand Adapter, the industry's lowest-latency Linux cluster interconnect for Message Passing Interface (MPI) and TCP/IP applications.
The cluster was purchased by the Center for Computational Science and Engineering (CSE) at the University of California, Davis. TeamHPC is delivering a 144-CPU AMD Opteron processor-based Linux cluster that leverages the PathScale InfiniPath interconnect to run computational models and simulations related to physics, discrete mathematics, engineering, biomedical diagnostics, and other processor-intensive HPC applications. The deployment consists of 36 server nodes, each equipped with two dual-core AMD Opteron processors and a PathScale InfiniPath Adapter. The nodes are interconnected via a Cisco TopSpin 270 InfiniBand switch.
"TeamHPC has the ability to grasp, understand, test and integrate the newest and most innovative high performance supercomputing technologies," said Bret Stouder, Vice President of TeamHPC. "We enable every customer to remotely access and test their clusters prior to shipment because we believe this is an important step in the process of acquiring a high performance Linux cluster. UC Davis is the latest example of the world-class scientific research institutions that recognize the unique value TeamHPC brings to the HPC industry."
TeamHPC proposed the PathScale InfiniPath HTX InfiniBand Adapters because of their ultra-low latency, highest effective bandwidth and unprecedented messaging rate. These attributes greatly improve MPI application performance and Linux cluster utilization. The highly pipelined, cut-through design of InfiniPath is optimized for applications sensitive to communication latency. PathScale InfiniPath delivers superior interconnect performance at commodity price levels by connecting directly to the AMD Opteron via an open standard HyperTransport HTX slot, and by using standard InfiniBand switching to scale to hundreds or thousands of nodes.
"We support scientists and academic researchers working to analyze and visualize highly complex physical and biological processes," said Bill Broadley, an Information Architect at UC Davis. "We require our compute resources to facilitate the best possible performance for our many communications-intensive applications. The PathScale InfiniPath Adapter is performing exceptionally thus far."
Performance results achieved on well-known HPC application benchmarks and in real HPC installations such as the new UC Davis cluster prove that the PathScale InfiniPath Adapter is the world's highest performance cluster interconnect. PathScale's innovative approach to high-speed InfiniBand interconnect reduces the workload required to process messages, enabling a dramatically higher message rate, and ultimately increasing the effective bandwidth. This enables users to solve their most challenging computational problems in the minimum period of time.
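The interplay of latency, message rate, and effective bandwidth described above can be illustrated with the standard latency-bandwidth (Hockney) model of point-to-point communication. The sketch below uses hypothetical numbers, not PathScale measurements, to show why lower startup latency dominates performance for small messages.

```python
# Hockney model: the time to send an n-byte message is t(n) = L + n / B,
# where L is the startup latency and B is the peak (asymptotic) bandwidth.
# Effective bandwidth is then n / t(n). All figures here are hypothetical.

def effective_bandwidth(n_bytes, latency_s, peak_bw_bytes_per_s):
    """Effective bandwidth (bytes/s) delivered for one n-byte message."""
    transfer_time = latency_s + n_bytes / peak_bw_bytes_per_s
    return n_bytes / transfer_time

# Compare two hypothetical interconnects with the same 1 GB/s peak
# bandwidth but different startup latencies: 1.5 us vs 5 us.
PEAK = 1e9  # bytes/s
for size in (1_024, 65_536, 1_048_576):
    bw_low = effective_bandwidth(size, 1.5e-6, PEAK)
    bw_high = effective_bandwidth(size, 5.0e-6, PEAK)
    print(f"{size:>9} B: {bw_low / 1e6:7.1f} MB/s vs {bw_high / 1e6:7.1f} MB/s")
```

For small messages the startup latency dominates the transfer time, so the lower-latency adapter delivers a much higher effective bandwidth; for large messages both converge toward the peak rate. This is the regime where a higher message rate translates directly into higher delivered bandwidth.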
"TeamHPC and PathScale have collaborated to deliver a high performance research platform to UC Davis that enables scientists and academic researchers to overcome the performance bottlenecks of computing systems of the past," said Len Rosenthal, VP of Marketing at PathScale. "The combined performance of AMD Opteron processors and the low-latency PathScale InfiniPath interconnect along with complete testing and integration solutions from TeamHPC opens a new chapter in high performance computing, where an economically priced system does not mean compromised performance."
New performance results have recently been published for InfiniPath, including the full Pallas Benchmark Suite and the full HPC Challenge Benchmarks. These latest results validate the performance advantages of PathScale InfiniPath as the highest performance commodity cluster interconnect for Linux-based HPC applications. The results can be viewed on PathScale's website.
TeamHPC, a division of M&A Technology (http://www.teamhpc.com), specializes in High Performance Computing and assembles and integrates all of its products in an ISO 9000:2000-certified manufacturing plant. TeamHPC offers researchers the unique opportunity to access its computational resources for benchmark and application testing before products are shipped. Carving new paths in the HPC market, TeamHPC also provides a 24-hour data center environment that allows researchers to host their computational machines at M&A Technology's headquarters in Carrollton, TX.