ExoGENI Wiki

Rack Operators


Hardware for the majority of current-generation ExoGENI racks is supplied by IBM and typically consists of 11 x3650 M4 servers (one configured as the head node, the rest as workers), 6TB of expandable iSCSI storage, and two switches - a G8052 1G/10G management switch and a G8264 10G/40G OpenFlow-enabled dataplane switch. We intentionally selected 2U servers for improved expandability (to preserve our ability to install custom hardware, such as NetFPGA 10G cards, GPGPUs, or experimental NICs). Compatible configurations from Dell, Cisco, and Ciena exist as well.

The software is a combination of open-source cloud software (OpenStack and xCAT) augmented with ExoGENI-specific functionality, with GENI federation and orchestration provided by ORCA and FOAM. Both are configured specifically for the ExoGENI environment. The ExoGENI Operations team hosts a software repository with RPMs for all the needed packages. The base OS installed on ExoGENI racks is CentOS 6.2 or 6.3.

Management of all ExoGENI racks is performed remotely by the ExoGENI Operations team at RENCI and Duke, using a combination of scripts (available from the ExoGENI Subversion repository) and tools such as Puppet.

Monitoring of ExoGENI racks is performed using a hierarchical Check_MK/Nagios deployment, which allows site operators and the ExoGENI Operations team to monitor the racks and also supplies data to the GMOC via a series of software adapters.
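As an illustration, a Check_MK/Nagios deployment can be queried from the monitoring host through its Livestatus socket. The site name and socket path below are purely illustrative (the path shown is the OMD default); consult your installation for the actual location:

```shell
# Hypothetical example: list all monitored hosts and their states.
# Livestatus queries are plain text; unixcat ships with Check_MK/Livestatus.
printf 'GET hosts\nColumns: name state\n\n' | unixcat /omd/sites/exogeni/tmp/run/live
```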

Authentication & Authorization

Administrator accounts on racks are managed through a hierarchical LDAP deployment rooted at RENCI. Site administrators have rights to their own rack, while the ExoGENI Ops team has admin rights on all racks.

  • Site admins should contact exogeni-ops@renci.org to request LDAP credentials.
    • (Only RENCI can add accounts to the central LDAP master.)
  • Admin privileges are granted via sudo.
    • Sudo access depends on LDAP group membership.
  • Site admins will be able to SSH to their own rack's head node, and to no other ExoGENI site's rack (LDAP group-based authorization).
    • Once logged into the head node, you may use sudo to become root. root can SSH without a password (using keys) to any worker node.
  • Site admins will be able to SSH to their rack's networking switches, and their LDAP credentials will be validated by a Radius server.
  • Regarding LDAP passwords:
    • Currently, your initial password must be generated by RENCI and sent to you. Afterward, you can use the password change form at https://control.exogeni.net/password; your temporary password is needed only to sign in.
    • If you have completely lost your password, you will need to request a reset from RENCI (exogeni-ops@renci.org), as we currently do not have a secure, unattended password reset mechanism.
    • If you are logged into a host via SSH, you can use the standard command-line 'passwd' command to change your LDAP password.
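The access model above can be sketched as a short session. The usernames and hostnames are illustrative placeholders, not actual rack names:

```shell
# LDAP credentials work only on your own rack's head node (group-based authorization).
ssh jdoe@head.myrack.example.org

# Becoming root requires membership in the appropriate LDAP sudo group.
sudo -i

# root can reach any worker node without a password, via pre-installed SSH keys.
ssh worker-1

# From any SSH session, the standard passwd command updates your LDAP password.
passwd
```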

Experimenter authorization to rack resources is via certificates issued by the GENI federation, with an additional white-list filter.

  • Experimenters do not have login access to the underlying physical resources (head node, OpenStack worker nodes, switches); instead, they are authorized to access provisioned slivers (this does include provisioned bare-metal nodes).
  • The rack design includes several layers of security to isolate experimenters from critical rack components.

Site Requirements

Typical rack requirements are:

  • Power/Space/Cooling (see hardware and power sections for more details)
  • We strongly prefer a /24 block of publicly routable IPv4 addresses to support Layer 3 connections to the campus network (1G). If that is not possible, we can make do with a /25 block. Discontiguous address segments are acceptable as well. Physically, there are three connections:
    1. 10/100/1000BASE-T to Juniper SSG5 VPN appliance (each rack connects back to RENCI over a secure VPN).
      • A static public IPv4 address is assigned to SSG5.
    2. 10/100/1000BASE-T to the head node (a redundant connection in case the G8052 malfunctions or is misconfigured)
      • A static public IPv4 address is assigned to the head node.
    3. Pluggable optics connection OR 1000BASE-T to G8052 (primary Layer 3 connection into campus)
      • The remaining publicly routable IP addresses are assigned dynamically to experimenter-provisioned VMs and bare-metal nodes within the rack.
  • A 1/10/40G Layer 2 connection to Internet2 AL2S or ION, or to ESnet, either directly or through an intermediate Layer 2 provider. This connects the rack to the GENI Mesoscale OpenFlow environment as well as to the traditional VLAN-based services offered by Internet2 and ESnet.
    • Three VLAN ranges must be negotiated
      • A pool of VLANs for ExoGENI native stitching (qty 20) - negotiated with help from the ExoGENI team
      • A pool of VLANs for GENI stitching (qty TBD) - negotiated with help from the GPO
      • A pool of VLANs for connecting to Mesoscale OpenFlow deployments - negotiated with help from the GPO
  • Ability to provide emergency contacts and occasional remote eyes and hands
  • For GPO-sponsored racks:
    • Rack hardware is built by IBM
    • The rack is delivered pre-assembled from the factory, with the software pre-installed and pre-configured.
    • For the duration of the project (through 2014), RENCI remains the primary operator of the equipment, which is owned by UNC Chapel Hill. After that, transfer of the equipment and of operational responsibilities will be negotiated.
  • For anyone wanting to purchase their own rack:
    • Configurations from IBM, Dell and Cisco are available.
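To make the address budget above concrete, here is a minimal sketch of how a /24 public block is carved up, assuming the only static assignments are the SSG5 and the head node (the exact split is site-specific):

```shell
#!/bin/bash
# Address budget for a /24 public IPv4 block (illustrative arithmetic only).
PREFIX=24
TOTAL=$(( 2 ** (32 - PREFIX) ))  # 256 addresses in a /24
USABLE=$(( TOTAL - 2 ))          # exclude the network and broadcast addresses
STATIC=2                         # SSG5 VPN appliance + head node
POOL=$(( USABLE - STATIC ))      # left for dynamically assigned VMs / bare-metal nodes
echo "dynamic pool size: $POOL"
```

With a /25 block the same arithmetic yields a pool of 124 addresses, which is why the /24 is strongly preferred.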

xCAT Stateless Image Customizations

Shared VLAN Usage
