The NHPC cluster consists of one front node, often called the head node or management node. It has two Intel Xeon E5649 processors (2.53 GHz), 48 GB of memory, and two HP NC382i Gigabit network interfaces. In addition to the two Gigabit interfaces, it has one 10 Gigabit interface that connects directly to a 10 Gigabit port on the RHnet router. It also has an InfiniBand interface connected to the NHPC storage system (an X9320 with IBRIX Fusion software).
Two front-end nodes, also called login nodes, are configured in the NHPC cluster. Both are registered in DNS under the single name gardar.nhpc.hi.is, so users do not have to remember the name of an individual login node to connect; if one node goes down, users can still connect and work through the other. Both login nodes have identical hardware: an HP ProLiant DL360 G7 with two Intel Xeon E5649 processors (2.53 GHz) and 48 GB of memory. Each has two Gigabit network interfaces and one 10 Gigabit interface that connects directly to a 10 Gigabit port on the RHnet router. The login nodes also have InfiniBand interfaces connected to the NHPC storage system (an X9320 with IBRIX).
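The failover behaviour described above can be sketched as a small simulation. Note that this is a conceptual illustration only: the individual hostnames and the `is_up` check below are hypothetical placeholders, not the cluster's actual configuration; only the alias gardar.nhpc.hi.is comes from this document.

```python
# Conceptual sketch of the DNS-alias failover: gardar.nhpc.hi.is points to
# both login nodes, so a client transparently ends up on whichever node
# answers. The individual node names below are hypothetical placeholders.
LOGIN_NODES = ["login1.nhpc.hi.is", "login2.nhpc.hi.is"]

def pick_login_node(nodes, is_up):
    """Return the first login node that responds, mimicking what the
    DNS alias achieves for users without them naming a specific node."""
    for node in nodes:
        if is_up(node):
            return node
    raise RuntimeError("no login node reachable")

# Example: login1 is down, but the user still lands on login2.
up = {"login1.nhpc.hi.is": False, "login2.nhpc.hi.is": True}
print(pick_login_node(LOGIN_NODES, up.get))  # prints login2.nhpc.hi.is
```

In practice users simply run `ssh gardar.nhpc.hi.is`; the DNS configuration performs the equivalent selection for them.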
The NHPC cluster has a total of 288 compute nodes, spread across 18 HP c7000 enclosures. Each enclosure contains one Gigabit Ethernet switch, one 40 Gbps InfiniBand QDR switch, one on-board 100 Mb Ethernet port, six power supplies, ten cooling fans, and 16 compute blades. Each compute node has two Intel Xeon E5649 processors (2.53 GHz, 6 cores each), 24 GB of memory, one HP NC362i Gigabit network interface, and one HP IB dual-port 4X QDR CX-2 PCIe 40 Gbps InfiniBand interface. There is also local storage in the form of a 250 GB SATA disk drive.
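From the per-node figures above one can derive the aggregate capacity of the compute partition. A quick sanity check, using only the numbers quoted in this section:

```python
# Aggregate compute capacity from the per-node specifications above.
nodes = 288            # compute nodes (18 enclosures x 16 blades)
sockets_per_node = 2   # two Intel Xeon E5649 per node
cores_per_socket = 6   # 6 cores per E5649
mem_per_node_gb = 24   # 24 GB memory per node

total_cores = nodes * sockets_per_node * cores_per_socket
total_mem_gb = nodes * mem_per_node_gb

print(total_cores)   # 3456 cores in total
print(total_mem_gb)  # 6912 GB of aggregate memory
```

So the compute partition offers 3456 cores and roughly 6.9 TB of distributed memory in total.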
The NHPC cluster provides a separate storage system. It consists of an X9320 Network Storage System running the IBRIX Fusion software, with a total usable capacity of 71.6 TB and an average sequential read/write bandwidth of more than 2 GB/s. For more about the NHPC storage configuration, please see the Storage section.
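To put the 2 GB/s figure in context, here is a rough back-of-the-envelope estimate of how long one full sequential pass over the file system would take. Decimal units (1 TB = 10^12 bytes, 1 GB = 10^9 bytes) are assumed, since the document does not specify:

```python
# Rough estimate: time to stream the whole usable capacity at the
# quoted sequential bandwidth (decimal units assumed).
capacity_bytes = 71.6e12   # 71.6 TB usable capacity
bandwidth_bps = 2e9        # > 2 GB/s sequential read/write

seconds = capacity_bytes / bandwidth_bps
print(round(seconds / 3600, 1))  # roughly 9.9 hours
```

In other words, a full sequential read or write of the entire file system takes on the order of ten hours at the quoted rate.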