Rack Operators

ExoGENI racks' hardware is supplied by IBM and typically consists of 11 x3650 M4 servers (one configured as a head node, the others as workers) with 6TB of expandable iSCSI storage and two switches: an IBM G8052 1G/10G management switch and an IBM G8264 10G/40G OpenFlow-enabled dataplane switch. We intentionally selected 2U servers for improved expandability (to maintain our ability to install custom hardware, such as NetFPGA 10G cards, GPGPUs, or experimental NICs). Compatible configurations from Dell and Cisco exist as well.

The software is a combination of open-source cloud software (OpenStack and xCAT) augmented with ExoGENI-specific functionality, with GENI federation and orchestration provided by Orca and FOAM, both configured specifically for the ExoGENI environment. The ExoGENI Operations team hosts a software repository with RPMs for all of the needed packages. The base OS installation on ExoGENI racks is CentOS 6.2 or 6.3.

Management of all ExoGENI racks is performed remotely by the ExoGENI Operations team at RENCI and Duke, using a combination of scripts (available from the ExoGENI Subversion repository) and tools such as Puppet.

Monitoring of ExoGENI racks is performed using a hierarchical Check_MK/Nagios deployment, which allows both site operators and the ExoGENI Operations team to monitor the racks and also supplies data to GMOC via a series of software adapters.
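
As an illustration of how such a hierarchy can be wired together, a distributed Check_MK ("Multisite") setup lists the remote sites' Livestatus endpoints in the central server's multisite.mk, which uses plain Python syntax. This is only a minimal sketch; the site names, hostnames, and port below are hypothetical placeholders rather than the actual ExoGENI configuration.

  # multisite.mk on the central monitoring server -- a minimal sketch.
  # Site names, hostnames, and ports are hypothetical placeholders.
  sites = {
      # the central instance at the top of the hierarchy
      "central": {
          "alias": "Central monitoring (RENCI)",
      },
      # one entry per monitored rack, reached over Livestatus TCP
      "rack-example": {
          "alias": "Example rack",
          "socket": "tcp:rack-example.example.org:6557",
          "url_prefix": "/rack-example/",
      },
  }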

Administrator accounts on racks are managed through a hierarchical LDAP deployment rooted at RENCI. Site administrators have administrative rights on their own rack, while the ExoGENI Operations team has administrative rights on all racks. Experimenter authorization to rack resources is performed via certificates issued by the GENI federation, with an additional whitelist filter. Experimenters do not have login access to the basic physical resources (head node, OpenStack worker nodes, switches); instead, they are authorized to access provisioned slivers (including provisioned bare-metal nodes). The rack design includes several layers of security to isolate experimenters from critical rack components.
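
A rough sketch of how an administrator account might be looked up against such an LDAP hierarchy is shown below, using the python-ldap library. The server URI, bind DN, base DN, and attribute names are hypothetical placeholders and do not reflect the actual ExoGENI directory layout.

  # Hypothetical illustration with python-ldap; the URI, DNs, credentials,
  # and attribute names are placeholders, not the real ExoGENI directory.
  import ldap

  conn = ldap.initialize("ldaps://ldap.example.org")
  conn.simple_bind_s("cn=reader,dc=example,dc=org", "secret")

  # Look up a site administrator entry within the subtree for one rack.
  results = conn.search_s(
      "ou=admins,ou=rack-example,dc=example,dc=org",
      ldap.SCOPE_SUBTREE,
      "(uid=jdoe)",
      ["cn", "sshPublicKey"],
  )
  for dn, attrs in results:
      print(dn, attrs.get("cn"))

  conn.unbind_s()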

Typical rack requirements are:

  • Power/Space/Cooling (see hardware and power sections for more details)
  • A /25 of publicly routable IPv4 addresses (discontiguous address segments are acceptable as well; approximately 120 addresses in total) to support a Layer 3 connection to the campus network (1G); see the addressing sketch after this list. Physically there are 3 connections:
    1. 10/100/1000BASE-T to Juniper SSG5 VPN appliance (each rack connects back to RENCI over a secure VPN).
      • A static public IPv4 address is assigned to SSG5.
    2. 10/100/1000BASE-T to the head node (redundant connection in case the G8052 malfunctions or is misconfigured)
      • A static public IPv4 address is assigned to the head node.
    3. Pluggable optics connection to the G8052 (the primary Layer 3 connection into campus)
      • The rest of the IP addresses in the /25 are assigned dynamically to experimenter-provisioned VMs and bare-metal nodes within the rack.
  • A 1/10/40G Layer 2 connection to NLR FrameNet or Internet2 AL2S, either directly or through an intermediate Layer 2 provider. This connects the rack to the GENI mesoscale OpenFlow environment as well as to traditional VLAN-based services offered by NLR and/or Internet2.
  • Ability to provide emergency contacts and occasional remote eyes and hands
  • For GPO-sponsored racks:
    • The rack is delivered pre-assembled at the factory with software already pre-configured and installed.
    • For the duration of the project (through 2014), RENCI remains the primary operator of the equipment, which is owned by UNC Chapel Hill. After that, the transfer of equipment and operational responsibilities will be negotiated.
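
To make the address arithmetic concrete, the short Python sketch below uses the standard-library ipaddress module to show how a /25 breaks down. The example prefix and the exact number of statically reserved addresses are assumptions for illustration; the requirement above only calls for roughly 120 dynamically assignable addresses.

  # Illustrative only: the prefix and the reservation count are assumptions.
  import ipaddress

  block = ipaddress.ip_network("192.0.2.0/25")   # a /25 holds 128 addresses
  reserved = 8   # e.g. network/broadcast, campus gateway, head node, SSG5, spares
  dynamic = block.num_addresses - reserved

  print(f"total addresses in the /25: {block.num_addresses}")              # 128
  print(f"left for dynamically provisioned VMs/bare-metal nodes: ~{dynamic}")  # ~120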