
ExoGENI Head Node

Example Hardware Specs

Vendor:    IBM
Model:     x3650 M3
Revision:  7945-AC1
CPU(s):    Intel Xeon E5620
CPU Speed: 2.40GHz
CPU Cores: 4
CPU Count: 2
RAM:       8 x 2GB DDR3 PC3-10600 1333MHz ECC
HDD:       > 500 GB
1GB NICs:  8

Network Configuration

Interfaces

Bond  | Slave Interfaces                                  | VLAN(s)        | Tagging  | Purpose
bond0 | eth0 (from public); eth4 (port 47 on the BNT8052) | 1010           | untagged | Public
bond1 | eth1, eth5                                        | 1009           | untagged | iSCSI
bond2 | eth2, eth3, eth6, eth7                            | 1006,1007,1008 | tagged   | Mgt, OpenStack, xCAT
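Because bond2 carries tagged VLANs, each VLAN is typically broken out as an 802.1q subinterface on top of the bond, with its own ifcfg script. A minimal sketch for VLAN 1006 (the network and address shown are placeholders for illustration, not values from a real deployment):

```
# /etc/sysconfig/network-scripts/ifcfg-bond2.1006
# 802.1q subinterface carrying VLAN 1006 over bond2
DEVICE="bond2.1006"
VLAN="yes"
BOOTPROTO="none"
ONBOOT="yes"
NETWORK="10.100.6.0"
NETMASK="255.255.255.0"
IPADDR="10.100.6.1"
```

VLANs 1007 and 1008 would get analogous ifcfg-bond2.1007 and ifcfg-bond2.1008 files; bond2 itself carries no IP address of its own in this layout.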

A Word on Public Interface Connection

In an ideal scenario, the head node has public-access failover capability via a bonded active/active or active/passive interface. In that scenario, the provider supplies two cables of public-facing connectivity: one runs to the head node, and the other runs to the SSG5. For example:

  • Cable 1 goes to eth0 on the head node
  • Cable 2 goes to port 0/0 on the SSG5
    • The SSG5 connects to port 48 on the mgt switch (BNT8052)
    • Port 47 on the mgt switch runs back to eth4 on the head node

The SSG5 has an IPsec tunnel back to an SSG5 at RENCI. If the cable running to eth0 loses network connectivity, the head node should still be able to reach RENCI via the SSG5 link.

Example RHEL/CentOS ifcfg-device scripts

#/etc/sysconfig/network-scripts/ifcfg-bond0
DEVICE="bond0"
BOOTPROTO="none"
ONBOOT="yes"
NETWORK="X.X.X.0"
NETMASK="255.255.255.0"
BROADCAST="X.X.X.255"
IPADDR="X.X.X.Y"
BONDING_OPTS="mode=balance-alb miimon=25"

#/etc/sysconfig/network-scripts/ifcfg-bond1
DEVICE="bond1"
BOOTPROTO="none"
ONBOOT="yes"
NETWORK="10.102.0.0"
NETMASK="255.255.255.0"
BROADCAST="10.102.0.255"
IPADDR="10.102.0.1"
BONDING_OPTS="mode=802.3ad miimon=25 xmit_hash_policy=layer3+4 lacp_rate=1"
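Each bond also needs a matching ifcfg script for every slave interface listed in the table above. A minimal sketch for one slave of bond1 (eth1), following the standard RHEL/CentOS convention — the same pattern applies to every other slave, with DEVICE and MASTER adjusted accordingly:

```
# /etc/sysconfig/network-scripts/ifcfg-eth1
# Slave of bond1; carries no IP configuration of its own
DEVICE="eth1"
BOOTPROTO="none"
ONBOOT="yes"
MASTER="bond1"
SLAVE="yes"
```

Note that bond0 and bond1 deliberately use different bonding modes: balance-alb needs no switch-side support, while 802.3ad requires an LACP-capable port channel on the switch.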