public:operators:start, current revision 2017/01/30 22:48 by mcevik (previous revision 2014/02/20 12:27 by jonmills, [Site Requirements])
  
==== Summary ====
ExoGENI racks' [[:public:hardware:start | hardware]] for the majority of the current generation of racks is supplied by IBM and typically consists of 11 x3650M4 servers (one configured as a head node, the others as workers) with 6TB of expandable iSCSI storage and two switches: an 8052 1G/10G management switch and an 8264 10G/40G OpenFlow-enabled dataplane switch. We intentionally selected 2U servers for improved expandability (to maintain our ability to install custom hardware, like NetFPGA10G, GPGPUs or experimental NICs). Compatible configurations from **[[:public:hardware:start | Dell, Cisco and Ciena]]** exist as well.
  
The [[:public:software:start | software]] is a combination of open-source cloud software (OpenStack and xCAT) [[https://code.renci.org/gf/project/networkedclouds/wiki/?pagename=CloudBling | augmented with ExoGENI-specific functionality]], with GENI federation and orchestration provided by [[https://geni-orca.renci.org | Orca]] and [[https://openflow.stanford.edu/display/FOAM | FOAM]]. Both are [[:private:configuration:start | configured specifically for the ExoGENI environment]]. The ExoGENI Operations team hosts a [[http://software.exogeni.net/repo/exogeni/6/current/ | software repository]] with RPMs for all the needed packages. The base OS installation on ExoGENI racks is CentOS 6.2/6.3.
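As a sketch, a rack node could be pointed at that repository with a yum repo stanza like the one below. The file name and section label are assumptions (only the baseurl comes from this page), and GPG settings would follow site policy:

```ini
# /etc/yum.repos.d/exogeni.repo -- hypothetical file name and section label
[exogeni-current]
name=ExoGENI current packages (CentOS 6)
baseurl=http://software.exogeni.net/repo/exogeni/6/current/
enabled=1
gpgcheck=0
```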
Typical rack requirements are:
  * Power/Space/Cooling (see [[:public:hardware:start | hardware]] and [[:public:hardware:power | power]] sections for more details)
  * We strongly prefer a /24 block of publicly routable IPv4 addresses to support **Layer 3 connections to the campus network** (1G). If that is simply not possible, we can make do with a /25 block. Non-contiguous address segments are acceptable as well. Physically, there are 3 connections:
    - 10/100/1000BASE-T to Juniper SSG5 VPN appliance (each rack connects back to RENCI over a secure VPN).
      * A static public IPv4 address is assigned to the SSG5.
    - [[:public:hardware:network:start | Pluggable optics connection]] OR 1000BASE-T to G8052 (primary Layer 3 connection into campus)
      * The rest of the publicly routable IP addresses are assigned dynamically to experimenter-provisioned VMs and baremetal nodes within the rack.
  * A 1/10/40G **Layer 2 connection to Internet2 AL2S or ION or ESnet**, either directly or through an intermediate Layer 2 provider. This connects the rack to the GENI Mesoscale OpenFlow environment as well as traditional VLAN-based services offered by I2 and ESnet.
    * [[:public:hardware:network:start | Pluggable optics connection]] to G8264
    * Three VLAN ranges must be negotiated:
      * A pool of VLANs for ExoGENI native stitching (qty 20) - negotiated with help from the ExoGENI team
      * A pool of VLANs for GENI stitching (qty TBD) - negotiated with help from the GPO
      * A pool of VLANs for connecting to Mesoscale OpenFlow deployments - negotiated with help from the GPO
  * Ability to provide emergency contacts and occasional remote eyes and hands
  * For GPO-sponsored racks:
  * For anyone wanting to purchase their own rack:
    * Configurations from IBM, Dell and Cisco are available.
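The public-address budget described above can be sketched with Python's standard ''ipaddress'' module. The example prefix and the number of static reservations are illustrative assumptions, not site policy:

```python
# Hypothetical sketch of a rack's public IPv4 budget. The two "static"
# reservations stand in for infrastructure addresses (e.g. the SSG5 and
# the G8052 uplink); everything else feeds the dynamic experimenter pool.
import ipaddress

def dynamic_pool(block, static_reserved=2):
    """Split a public block into static reservations and the remaining
    dynamically assigned pool for VMs and baremetal nodes."""
    net = ipaddress.ip_network(block)
    hosts = list(net.hosts())            # excludes network/broadcast addresses
    static = hosts[:static_reserved]     # infrastructure reservations
    dynamic = hosts[static_reserved:]    # experimenter-provisioned nodes
    return static, dynamic

# 192.0.2.0/24 is a documentation prefix (RFC 5737), used here as a stand-in.
static, dyn = dynamic_pool("192.0.2.0/24")
print(len(static), len(dyn))  # 2 252  (a /25 would leave only 124)
```

This makes the /24 vs /25 trade-off concrete: halving the block roughly halves the pool available to experimenters.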
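The three negotiated VLAN pools can likewise be sketched as non-overlapping ID ranges carved from one block. Every number below is an assumption for illustration; actual IDs and the TBD pool sizes are negotiated per site with the ExoGENI team and the GPO:

```python
# Illustrative carving of the three negotiated VLAN pools from a
# contiguous block of IDs. Valid 802.1Q VLAN IDs run 1-4094.
def carve_vlan_pools(start, sizes):
    """Return consecutive, non-overlapping VLAN ID ranges, one per pool."""
    pools, nxt = {}, start
    for name, size in sizes.items():
        if nxt + size - 1 > 4094:
            raise ValueError("VLAN ID space exhausted")
        pools[name] = range(nxt, nxt + size)
        nxt += size
    return pools

pools = carve_vlan_pools(1000, {
    "exogeni-stitching": 20,   # qty 20 per the requirements above
    "geni-stitching": 10,      # qty TBD in the text; 10 is a placeholder
    "mesoscale-openflow": 10,  # placeholder size
})
print(pools["exogeni-stitching"])  # range(1000, 1020)
```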

==== xCAT Stateless Image Customizations ====

[[:public:operators:xcat_-_stateless_image_updates | Update/customize xCAT netboot images for baremetal servers]]

==== Shared VLAN Usage ====

[[:public:operators:shared_vlan_usage | Connect VMs to the shared VLANs]]