The NHPC cluster uses X9320 Network Storage Systems running the IBRIX Fusion software. The storage system is managed by an HP StorageWorks X9300 Management Server, which provides a single point of access for system configuration and control, including health monitoring, performance and utilization monitoring, and expansion of file-system storage capacity. It comprises two X9320 systems (four file-serving nodes in total) with a usable capacity of 71.6 TB and an average sequential read/write bandwidth of more than 2 GB/s.
4x HP ProLiant DL380 G6 (2x file-server pairs)
Two quad-core Intel Xeon E5520 processors (2.26 GHz)
48 GB (12x 4 GB) memory
2x 146 GB SAS disk drives
4x embedded Gigabit Ethernet ports
1x IB 4X QDR 40 Gb/s adapter (PCIe Gen2)
2x 6 Gb/s SAS HBAs
144x 600 GB 15k SAS disks (usable capacity: 71.6 TB, raw capacity: 86.5 TB)
2x P2000 G3 Modular Smart Array systems (each with redundant controllers)
12x 2U disk enclosures, 12 drives each (144 drive bays in total)
144x 600 GB 15k SAS disk drives
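As a sanity check on the disk inventory above, the raw capacity follows directly from the drive count and size. A minimal sketch (the gap between raw and usable capacity is presumably consumed by RAID parity and file-system overhead; the exact breakdown is not stated in the source):

```python
# Raw capacity from the enclosure inventory: 144 drives x 600 GB each.
num_drives = 144
drive_gb = 600

raw_tb = num_drives * drive_gb / 1000  # 86.4 TB, matching the quoted ~86.5 TB
usable_tb = 71.6                       # usable capacity quoted in the spec

# Fraction of raw space lost to redundancy and metadata (derived, not quoted).
overhead_pct = (raw_tb - usable_tb) / raw_tb * 100
print(f"raw: {raw_tb:.1f} TB, usable: {usable_tb} TB, overhead: {overhead_pct:.1f}%")
```

The arithmetic confirms the quoted figures are mutually consistent, with roughly 17% of raw space reserved for redundancy.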
The NHPC cluster's compute nodes, front-end nodes, and file servers are connected by a high-bandwidth, low-latency InfiniBand fabric via HP InfiniBand 4X QDR 40 Gb/s PCI Express Gen2 dual-channel host adapters. The configuration uses Mellanox/Voltaire InfiniBand 4X QDR (Quad Data Rate) technology. The compute nodes are connected in groups of 16 to a 32-port IB 4X QDR leaf switch housed in each c7000 enclosure, and each leaf switch is connected to a Mellanox/Voltaire Grid Director 4036 spine switch via 8 QDR links.
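The leaf-to-spine wiring described above implies a 2:1 oversubscribed fabric: 16 node-facing links share 8 spine uplinks on each leaf switch. A minimal sketch of that arithmetic (link speed and port counts are taken from the text; the per-node figure is derived here, not stated in the source):

```python
# Per-leaf-switch link budget from the topology described above.
nodes_per_leaf = 16    # compute nodes per c7000 enclosure / leaf switch
uplinks_per_leaf = 8   # QDR links from each leaf to the Grid Director 4036 spine
qdr_gbps = 40          # 4X QDR signaling rate per link

oversubscription = nodes_per_leaf / uplinks_per_leaf  # node links per uplink
uplink_bw_gbps = uplinks_per_leaf * qdr_gbps          # aggregate bandwidth toward the spine
per_node_gbps = uplink_bw_gbps / nodes_per_leaf       # worst-case share if all nodes send

print(f"{oversubscription:.0f}:1 oversubscription, "
      f"{per_node_gbps:.0f} Gb/s per node toward the spine at full load")
```

In other words, traffic staying within one enclosure gets the full 40 Gb/s per node, while traffic crossing the spine can be halved under worst-case all-to-all load.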