ExoGENI Software

The testbed offers multiple levels of provisioning interfaces for user access and resource management, including standard cloud interfaces (EC2 and xCAT), OpenFlow, and layered GENI control and monitoring functions. One goal is flexible, automated deployment of customized software stacks on shared servers, with secure isolation and manageable quality of service. We expect the majority of users to rely on virtualization, as it offers greater freedom in the choice of OS, kernel, and filesystem. The testbed also offers a bare-metal imaging capability based on the open-source xCAT provisioning tool (developed and maintained by IBM), with a small number of vetted images.

The figure below shows the ExoGENI software stack. For compute element provisioning it uses xCAT (bare-metal instances) and OpenStack (virtualized instances). To support OpenFlow, an instance of FlowVisor runs on the head node, so that both FOAM and ORCA can communicate with it to create slices. ORCA communicates directly with the Layer 2 switch to support VLAN-based topology creation (without OpenFlow).

The rack exposes multiple programmatic interfaces:

Resource Monitoring

Nagios, an established and versatile open-source monitoring suite, is used as the low-level monitoring solution for operations staff (it can also be used to feed GENI Instrumentation and Measurement). The Nagios instances (one per rack) are aggregated into a single view, allowing operations staff to monitor the health of individual resources and instantiated slivers (VMs) in each rack in an easy-to-understand fashion. This model permits RENCI staff (and potentially GPO staff and GMOC) to view the health of every rack, while on-site staff can view the health of just their own rack.

Nagios collects the most common performance metrics (CPU, memory, and disk utilization, network traffic, temperature readings). The IBM x3650 M3/M4 server family has extensive probes for server health monitoring (including power consumption), which we will work to enable and expose via Nagios and to GENI users.
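
For a sense of how such metrics reach Nagios, the sketch below follows the standard Nagios plugin convention: a check reports through its exit code (0 = OK, 1 = WARNING, 2 = CRITICAL) and a one-line status message, with optional performance data after a "|". The metric and thresholds here are hypothetical illustrations, not ExoGENI's actual checks.

    #!/usr/bin/env python3
    # Minimal Nagios-style check plugin. Nagios interprets the exit code
    # (0=OK, 1=WARNING, 2=CRITICAL, 3=UNKNOWN); text after '|' is
    # performance data that can be stored for trending.
    import os
    import sys

    WARN, CRIT = 80.0, 95.0  # hypothetical utilization thresholds (percent)

    def utilization():
        # Rough stand-in metric (Unix only): 1-minute load average
        # scaled by the number of cores.
        return 100.0 * os.getloadavg()[0] / (os.cpu_count() or 1)

    def main():
        u = utilization()
        perf = f"util={u:.1f}%;{WARN};{CRIT}"
        if u >= CRIT:
            print(f"CRITICAL - utilization {u:.1f}% | {perf}")
            return 2
        if u >= WARN:
            print(f"WARNING - utilization {u:.1f}% | {perf}")
            return 1
        print(f"OK - utilization {u:.1f}% | {perf}")
        return 0

    if __name__ == "__main__":
        sys.exit(main())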

Monitoring information is also sent to UKY using the blowhole software (the repository is private, but the code is available upon request), which can subscribe to slice event notifications distributed by multiple ORCA controllers over XMPP and perform a variety of tasks based on its configuration.
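
The subscription pattern blowhole relies on can be illustrated with a short sketch. This assumes the slixmpp Python library and hypothetical account names; ORCA's actual event payloads and pubsub node layout are not reproduced here.

    # Sketch of an XMPP slice-event listener in the style of blowhole,
    # using the slixmpp library. The JID and password are hypothetical.
    import slixmpp

    class SliceEventListener(slixmpp.ClientXMPP):
        def __init__(self, jid, password):
            super().__init__(jid, password)
            self.register_plugin("xep_0060")  # XMPP publish-subscribe
            self.add_event_handler("session_start", self.on_start)
            self.add_event_handler("pubsub_publish", self.on_publish)

        async def on_start(self, event):
            self.send_presence()
            await self.get_roster()

        def on_publish(self, msg):
            # React to a slice event notification, e.g. log it or feed
            # it to a monitoring database, per the configuration.
            print("slice event from", msg["from"])

    if __name__ == "__main__":
        xmpp = SliceEventListener("monitor@xmpp.example.net", "secret")
        xmpp.connect()
        xmpp.process(forever=True)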

GENI Resources

ExoGENI may be viewed as a group of independent resource providers within a larger GENI federation. A resource provider is represented by an ORCA AM (not to be confused with a GENI AM; ORCA AMs currently do not expose the GENI AM API, because ORCA’s internal interfaces rely on tickets, a feature currently under discussion for a future GENI API). Creating complex slices with resources from multiple providers requires a coordinating function, which in ORCA is fulfilled by a broker actor. ORCA AMs delegate some portion of their resources to one or more brokers. Users interact with ORCA via the SM actor, which exposes the GENI AM API as well as ORCA’s native user-oriented XMLRPC interface. SMs receive tickets from brokers for the needed resources and redeem them with ORCA AMs to instantiate those resources.
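
The division of labor behind this ticket flow can be summarized with a toy model. The classes below are illustrative only (none of this is ORCA’s actual code or API): an AM delegates inventory to a broker, an SM obtains a ticket from the broker, and the AM instantiates slivers only against a valid ticket.

    # Toy model of ORCA's delegate/ticket/redeem pattern (illustrative only).
    from dataclasses import dataclass

    @dataclass
    class Ticket:
        broker: str
        resource_type: str
        units: int

    class Broker:
        def __init__(self, name):
            self.name, self.inventory = name, {}

        def delegate(self, rtype, units):
            # An AM delegates a share of its resources to this broker.
            self.inventory[rtype] = self.inventory.get(rtype, 0) + units

        def issue_ticket(self, rtype, units):
            if self.inventory.get(rtype, 0) < units:
                raise RuntimeError("insufficient delegated resources")
            self.inventory[rtype] -= units
            return Ticket(self.name, rtype, units)

    class AggregateManager:
        def __init__(self, name, trusted_brokers):
            self.name, self.trusted = name, set(trusted_brokers)

        def redeem(self, ticket):
            # Honor tickets only from brokers this AM has delegated to.
            if ticket.broker not in self.trusted:
                raise RuntimeError("ticket from unknown broker")
            return f"{ticket.units}x {ticket.resource_type} sliver(s) on {self.name}"

    am = AggregateManager("rack-A", trusted_brokers={"ExoBroker"})
    broker = Broker("ExoBroker")
    broker.delegate("VM", 10)              # AM -> broker delegation
    ticket = broker.issue_ticket("VM", 2)  # SM requests a ticket
    print(am.redeem(ticket))               # SM redeems it at the AM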

An ORCA AM is a generic ORCA server configured with local policies and plug-in handler scripts to control the aggregate’s resources or invoke the underlying IaaS interfaces to create and manipulate slivers. The initial ExoGENI deployment includes four kinds of aggregates offering network services:

  • Cloud sites. A cloud site AM exposes a slivering service to instantiate virtual machines (VMs) on its hosts and virtual links (VLANs) over its internal network. An ORCA cloud AM includes a handler plugin that invokes an EC2-compatible IaaS cloud service such as Eucalyptus or OpenStack. The handler also invokes an extension to the cloud interface with a command set to instantiate interfaces on VMs when they are requested, stitch interfaces to adjacent virtual links, and configure interface properties such as a layer-3 address and netmask. This extension is known as “NEuca”: we first implemented it for Eucalyptus, but have since ported it to OpenStack. For bare-metal provisioning we rely on xCAT.
  • Native ORCA-BEN circuit service. The AM for the Breakable Experimental Network (BEN) offers a multi-layer circuit service. For ExoGENI, it provides Ethernet pipes: point-to-point VLANs between pairs of named Ethernet interfaces in the BEN substrate. It uses a suite of ORCA plugins, including NDL-OWL queries to plan the paths from a substrate model. The handler scripts for BEN manage paths by forming and issuing commands to switch devices over the BEN management network.
  • External circuit services. For these services, the AM invokes a provider’s native provisioning APIs to request and manipulate circuits. The AM authenticates with its own identity as a customer of the provider. A circuit is a pipe between named Ethernet interfaces on the provider’s network. We have developed ORCA plugins for NLR’s Sherpa FrameNet service, Internet2 ION, and the OSCARS circuit reservation service used in ESNet.
  • Static tunnel providers. A provider can pre-instantiate a static pool of tunnels through its network and expose them as VLANs at its network edge. The AM runs a simple plugin that manages an exclusive assignment of VLANs to slices, given a concrete pool of legal VLAN tags that name the prearranged static tunnels (a sketch of such a tag allocator follows this list). This technique has proven useful for tunneling through campus and regional networks that do not offer a dynamic circuit service.
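
The static-pool policy in the last item reduces to exclusive tag assignment. A minimal sketch, with hypothetical tag values (not the actual ORCA plugin code):

    # Exclusive VLAN tag assignment from a fixed pool of prearranged
    # static tunnels (hypothetical tags; illustrative only).
    class VlanPool:
        def __init__(self, tags):
            self.free = set(tags)   # legal tags naming static tunnels
            self.assigned = {}      # slice id -> tag

        def allocate(self, slice_id):
            if not self.free:
                raise RuntimeError("no VLAN tags available")
            tag = self.free.pop()
            self.assigned[slice_id] = tag
            return tag

        def release(self, slice_id):
            self.free.add(self.assigned.pop(slice_id))

    pool = VlanPool([1001, 1002, 1003])  # hypothetical legal tag pool
    tag = pool.allocate("slice-42")
    print(f"slice-42 mapped to VLAN {tag}")
    pool.release("slice-42")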

Each virtual link instantiated from these aggregates appears as an atomic link (an Ethernet pipe or segment) in the slice’s virtual topology. At the layer below, the aggregate may perform internal stitching operations to construct a requested virtual pipe or segment from multiple stitched links traversing multiple substrate components within the aggregate’s domain. A virtual link may even traverse multiple providers if the host aggregate represents a multi-domain circuit service such as OSCARS.

GENI Software

The figure below shows the proposed ExoGENI ORCA software deployment. Each rack has its own ORCA AM that delegates resources to the local broker (for coordinating intra-rack allocations of compute resources and VLANs) and to the global broker (ExoBroker), which coordinates allocation for slices spanning more than one rack. Each rack also runs an ORCA SM that exposes the GENI AM API and ORCA’s NIaaS API to allow the allocation of resources from the rack. An ORCA AM running on the rack can stitch resources within its own rack; however, any stitching of resources external to the rack has to be done externally by GENI tools.

ORCA includes a powerful, demonstrated slice embedding and stitching engine that can take under-specified requests (unbound or partially bound topologies) and create global slices across multiple network providers. To make this engine available, an additional global SM (more can be deployed later) has been deployed. This ExoSM uses the global ExoBroker to acquire resources from multiple racks as well as from intermediate network providers in a coordinated fashion (including pre-negotiated VLAN tag assignment) and to stitch them together into a single slice.

The global broker also receives delegations of resources from the ORCA AMs controlling intermediate network providers such as Internet2, NLR, ANI, LEARN, and BEN. These resources are used by the ORCA stitching engine, via the ExoSM, to create global slices. The ExoBroker, ExoSM, and the ORCA AM actors responsible for the network providers run in VMs hosted on RENCI's VMware cluster, which provides hourly VM snapshots and a high degree of hardware redundancy.

Each rack uses several additional components external to ORCA but integrated into its operations:

  • ImageProxy – a component that helps distribute user-created filesystem/kernel/ramdisk images to different sites. Today’s cloud software (Eucalyptus, OpenStack, xCAT) is built on a single-site model in which each site has a separate image repository from which compute instances are booted. When multiple sites are involved, the user must somehow specify which image is to be used, and the image must be registered with the selected sites. ImageProxy fulfills this function by allowing the user to specify the URL of an image descriptor metafile together with its hash (for security purposes). When ORCA processes a slice request and binds the slice to particular sites, the ImageProxies at those sites download and register the user image based on the metafile URL, so the user’s image can be booted on compute slivers within the slice; the fetch-and-verify step is sketched after this list.
  • Shorewall DNAT Proxy – a component that helps sites with limited public IP address availability proxy services running on TCP/UDP ports of compute slivers through a single public IP address. Its operation is fully integrated with ORCA and is configurable by the site operator.
  • NodeAgent2 – a component that makes it possible to place remote calls onto substrates. Designed with OSCARS and NSI support in mind, it works for other types of substrates as well.
  • FlowVisor – the software provided by Stanford for creating OpenFlow slices. ORCA communicates with FlowVisor directly via its XMLRPC interface.
  • RSpec/NDL conversion service – used by all ORCA SMs to convert RSpec requests into NDL and NDL manifests into RSpec. To avoid susceptibility to central failures, several instances of the converter have been deployed at multiple sites, and all SMs are configured to use them in round-robin fashion for both load balancing and redundancy.
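
The ImageProxy fetch-and-verify step mentioned above amounts to downloading the descriptor metafile and checking it against the user-supplied hash before registration. A simplified sketch, assuming a SHA-1 digest and a hypothetical URL, with the actual registration against the site’s cloud software omitted:

    # Simplified sketch of ImageProxy's fetch-and-verify step.
    import hashlib
    import urllib.request

    def fetch_and_verify(metafile_url, expected_sha1):
        # Download the image descriptor metafile the user pointed at.
        data = urllib.request.urlopen(metafile_url).read()
        # A digest mismatch means the descriptor was corrupted or
        # tampered with, so refuse to register the image.
        digest = hashlib.sha1(data).hexdigest()
        if digest != expected_sha1:
            raise ValueError(f"hash mismatch: got {digest}")
        return data  # descriptor naming filesystem/kernel/ramdisk URLs

    # Hypothetical invocation once a slice is bound to this site:
    # fetch_and_verify("http://images.example.org/my-image.xml",
    #                  "0123456789abcdef0123456789abcdef01234567")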

ExoGENI also runs several global components:

  • ORCA Actor Registry – a secure service that allows distributed ExoGENI ORCA actors to recognize each other and create security associations so that they can communicate. All active actors are listed in the web view, and an actor requires approval from ExoGENI operations staff before it can start communicating with other actors. ORCA actors within each rack are manually configured to recognize each other and use the registry to find actors in other racks and the global actors.
  • ORCA Image Registry – a simple image listing service, available as a web page and as an XMLRPC service (for automated tools), that lists well-known images created by ExoGENI users.

GENI Integration

Since ORCA’s internal inter-actor APIs (the APIs used for communication among ORCA AMs, brokers, and SMs) differ significantly from the GENI API, it is the ORCA SMs that provide the GENI AM API compatibility layer. ORCA SMs implement the GENI AM API XMLRPC interface for users while speaking ORCA’s native APIs on the back end. They also perform the necessary conversions between GENI RSpec and ORCA’s internal semantic NDL-OWL resource representations. This approach allows ExoGENI to evolve its architecture while maintaining compatibility with GENI standards.
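
From a user’s or tool’s perspective, the compatibility layer looks like any other GENI aggregate. A minimal sketch of probing it with Python’s standard XMLRPC client, assuming a hypothetical SM endpoint and a GENI certificate/key pair for SSL client authentication:

    # Calling the GENI AM API exposed by an ORCA SM (hypothetical endpoint).
    import ssl
    import xmlrpc.client

    ctx = ssl.create_default_context()
    ctx.load_cert_chain(certfile="geni-cert.pem", keyfile="geni-key.pem")
    ctx.check_hostname = False        # testbed certs are often self-signed
    ctx.verify_mode = ssl.CERT_NONE

    am = xmlrpc.client.ServerProxy(
        "https://sm.example.exogeni.net:12443/xmlrpc", context=ctx)

    # GetVersion takes no arguments and reports the supported AM API
    # versions and RSpec formats.
    print(am.GetVersion())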

For interoperability with the traditional model of using OpenFlow in GENI, each rack runs an instance of FOAM. Manual approval of FOAM slices is performed by the GPO or their delegate.
