{{indexmenu_n>50}}
====== ExoGENI Software ======
  
The testbed offers multiple levels of provisioning interfaces for user access and resource management, including standard cloud interfaces (EC2 and xCAT), OpenFlow, and layered GENI control and monitoring functions. One goal is flexible, automated deployment of customized software stacks on shared servers, with secure isolation and manageable quality of service. We expect the majority of users to rely on virtualization, as this offers higher degrees of freedom in the choice of the OS, kernel and filesystem. The testbed also has a bare-metal imaging capability based on the xCAT provisioning tool (open-source xCAT, developed and maintained by IBM) with a small number of vetted images.
  
The figure below shows the ExoGENI software stack. For compute element provisioning it uses xCAT (bare-metal instances) and OpenStack (virtualized instances). To support OpenFlow, an instance of FlowVisor runs on the head node such that both FOAM and ORCA can communicate with it to create slices. ORCA also communicates directly with the Layer 2 switch to support VLAN-based topology creation (without OpenFlow).

The rack is capable of exposing multiple programmatic interfaces:
  * [[http://groups.geni.net/geni/wiki/GeniApi | GENI AM API]] (ORCA and FOAM)
  * [[https://geni-orca.renci.org/trac/wiki/orca-for-experimenter | ORCA NIaaS API]] (an ORCA-specific interface similar to the GENI AM API)
  * [[http://aws.amazon.com/documentation/ec2/ | EC2]] exposed by OpenStack
  * An [[http://occi-wg.org/ | OCCI]] adapter can be installed on top of OpenStack
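To make the first of these interfaces concrete: a GENI AM API call is an XMLRPC request over SSL. The following is a minimal Python sketch of a ''GetVersion'' call; the endpoint URL and certificate file names are hypothetical placeholders, not real ExoGENI values.

```python
# Sketch: issuing a GENI AM API GetVersion call over XMLRPC/SSL.
# AM_URL and the certificate paths are hypothetical placeholders.
import ssl
import xmlrpc.client

AM_URL = "https://rack-sm.example.org:11443/orca/xmlrpc"  # hypothetical

def make_am_proxy(url, certfile=None, keyfile=None):
    """Build an XMLRPC proxy; GENI AM API calls are XMLRPC over SSL."""
    ctx = ssl.create_default_context()
    if certfile:
        # GENI users authenticate with their own certificate/key pair.
        ctx.load_cert_chain(certfile, keyfile)
    return xmlrpc.client.ServerProxy(url, context=ctx)

def api_version(getversion_result):
    """Pull the advertised AM API version out of a GetVersion result.

    AM API v2+ wraps the payload in a 'value' field; v1 returns it bare.
    """
    value = getversion_result.get("value", getversion_result)
    return value.get("geni_api")

# Usage (would contact the server):
#   am = make_am_proxy(AM_URL, "user-cert.pem", "user-key.pem")
#   version = api_version(am.GetVersion())
```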
  
{{:public:software:exogeni-software.png?500|}}
====== Resource monitoring ======
  
Nagios, an established, versatile open-source monitoring software suite, is used as the low-level monitoring solution for operations staff (it can also be used to feed GENI Instrumentation and Measurement). A number of Nagios instances (one from each rack) are aggregated into a single view for operations staff, so that they can monitor the health of individual resources and instantiated slivers (VMs) in each rack in an easy-to-understand fashion. This model permits RENCI staff (and potentially GPO staff and GMOC) to view the health of each rack, while on-site staff can view the health of just their rack.
  
Nagios collects information on the most common performance metrics (CPU, memory, disk utilization, network traffic, temperature readings). The IBM x3650 M3/M4 server family has extensive probes for server health monitoring (including power consumption) which we will work to enable and expose via Nagios and to GENI users.
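The aggregation of per-rack Nagios data into a single view can be sketched as a worst-state rollup across racks. This is purely illustrative, not the actual deployment: the input data shape and the severity ordering (UNKNOWN ranked below WARNING) are assumptions made here.

```python
# Sketch: rolling per-rack Nagios check states up into one dashboard view.
# The data shape and severity ordering are illustrative assumptions.
NAGIOS_SEVERITY = {"OK": 0, "UNKNOWN": 1, "WARNING": 2, "CRITICAL": 3}

def rollup(rack_states):
    """Return the worst reported state per service across all racks.

    rack_states: {rack_name: {service_name: state_string}}
    """
    worst = {}
    for states in rack_states.values():
        for service, state in states.items():
            prev = worst.get(service, "OK")
            if NAGIOS_SEVERITY[state] > NAGIOS_SEVERITY[prev]:
                worst[service] = state
            else:
                worst[service] = prev
    return worst
```

A central dashboard could then render ''rollup(...)'' for RENCI operations while each site keeps its own full Nagios view.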

Monitoring information is also sent to UKY using the [[https://github.com/RENCI-NRIG/blowhole | blowhole]] software (the repository is private, but the code is available upon request), which can subscribe to slice event notifications distributed by multiple ORCA controllers over XMPP and perform a [[private:configuration:xmpp | variety of tasks]] based on its configuration.
  
====== GENI Resources ======
  
An ORCA AM is a generic ORCA server configured with local policies and plug-in handler scripts to control the aggregate’s resources or invoke the underlying IaaS interfaces to create and manipulate slivers. The initial ExoGENI deployment includes four kinds of aggregates offering network services:
  * Cloud sites. A cloud site AM exposes a slivering service to instantiate virtual machines (VMs) on its hosts and virtual links (VLANs) over its internal network. An ORCA cloud AM includes a handler plugin to invoke an EC2-compatible IaaS cloud service such as Eucalyptus or OpenStack. The handler also invokes an extension to the cloud interface with a command set to instantiate interfaces on VMs when they are requested, stitch interfaces to adjacent virtual links, and configure interface properties such as a layer-3 address and netmask. This extension is known as [[https://geni-orca.renci.org/trac/wiki/NEuca-overview |“NEuca”]]: we first implemented it for Eucalyptus, but have since ported it to [[https://code.renci.org/gf/project/networkedclouds/wiki/?pagename=CloudBling | OpenStack]]. For bare-metal provisioning we rely on xCAT.
  * Native [[http://ben.renci.org |ORCA-BEN]] circuit service. The AM for the Breakable Experimental Network (BEN) offers a multi-layer circuit service. For ExoGENI, it provides Ethernet pipes: point-to-point VLANs between pairs of named Ethernet interfaces in the BEN substrate. It uses a suite of ORCA plugins, including NDL-OWL queries to plan the paths from a substrate model. The handler scripts for BEN manage paths by forming and issuing commands to switch devices over the BEN management network.
  * External circuit services. For these services, the AM invokes a provider’s native provisioning APIs to request and manipulate circuits. The AM authenticates with its own identity as a customer of the provider. A circuit is a pipe between named Ethernet interfaces on the provider’s network. We have developed ORCA plugins for NLR’s Sherpa FrameNet service, Internet2 ION, and the OSCARS circuit reservation service used in ESNet.
====== GENI Software ======
  
The figure below shows the ExoGENI ORCA software deployment. Each rack has its own ORCA AM that delegates resources to the local broker (for coordinating intra-rack resource allocations of compute resources and VLANs) and to the global broker (**ExoBroker**), which coordinates allocation for slices spanning more than one rack. Each rack also runs an ORCA SM that exposes the GENI AM API and the ORCA NIaaS API to allow the allocation of resources from the rack. //An ORCA AM running on the rack can stitch resources within that rack; any stitching of resources external to the rack has to be done externally by GENI tools.//
  
ORCA has demonstrated a powerful slice embedding and stitching engine that can take under-specified (unbound or partially bound) topologies and create global slices across multiple network providers. In order to use this engine, an additional global SM (more can be deployed later) has been deployed. This **ExoSM** uses the global **ExoBroker** to acquire resources from multiple racks as well as intermediate network providers in a coordinated fashion (including pre-negotiated VLAN tag assignment) and stitch them together into a single slice.
  
A global broker also receives delegations of resources from the ORCA AMs controlling intermediate network providers such as Internet2, NLR, ANI, LEARN and BEN. These resources are used by the ORCA stitching engine via the ExoSM to create global slices. The ExoBroker, ExoSM and the ORCA AM actors responsible for the network providers run in VMs hosted by RENCI's VMware cluster, which provides hourly VM snapshots and a high degree of hardware redundancy.
  
{{:public:software:geni-software.png?400|}}
  
Each rack uses [[public:software:aux_infrastructure:start | several additional components]] external to ORCA but integrated into its operations:
  * [[https://code.renci.org/gf/project/networkedclouds/wiki/?pagename=ImageProxy | ImageProxy]] – a component that helps distribute user-created filesystem/kernel/ramdisk images to different sites. Today’s cloud software (Eucalyptus, OpenStack, xCAT) is built on a single-site model in which each site has a separate image repository from which compute instances are booted. When multiple sites are involved, a user must somehow specify which image is to be used, and the image must be registered with the selected sites. ImageProxy fulfills this function by allowing the user to specify a URL of the image descriptor meta-file and its hash (for security purposes). When ORCA processes a slice request and decides on a slice binding to particular sites, the ImageProxies at those sites download and register the user image based on the URL of the meta-file so the user’s image can be booted on compute slivers within the slice.
  * [[https://geni-orca.renci.org/trac/wiki/shorewall-with-orca | Shorewall DNAT Proxy]] – an optional component that helps sites with limited public IP address availability proxy services on TCP/UDP ports of compute slivers through a single IP address. Its operation is fully integrated with ORCA and is configurable by the site operator.
  * [[https://github.com/RENCI-NRIG/nodeagent2 | NodeAgent2]] – a component that makes it possible to place remote calls onto substrates. Designed with OSCARS and NSI support in mind, it works for other types of substrates as well.
  * FlowVisor – the software provided by Stanford for creating OpenFlow slices. ORCA communicates with FlowVisor directly via its XMLRPC interface.
  * RSpec/NDL conversion service – used by all ORCA SMs to convert RSpec requests to NDL and NDL manifests into RSpec. To avoid susceptibility to central failures, several instances of the converter have been deployed at multiple sites, and all SMs are configured to use them in round-robin fashion for both load balancing and redundancy.
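The round-robin selection with redundancy described in the last bullet can be sketched as follows. The converter URLs and the shape of the invocation callback are hypothetical; the point is simply that an SM rotates through the replicas and fails over when one is unreachable.

```python
# Sketch: round-robin selection over replicated RSpec/NDL converter
# instances with failover. URLs and the 'call' callback are hypothetical.
import itertools

class ConverterPool:
    def __init__(self, urls):
        self._cycle = itertools.cycle(urls)
        self._count = len(urls)

    def convert(self, call, payload):
        """Try up to one full rotation of converters; return first success."""
        last_err = None
        for _ in range(self._count):
            url = next(self._cycle)
            try:
                return call(url, payload)  # e.g. an XMLRPC invocation
            except OSError as err:         # instance unreachable: try the next
                last_err = err
        raise RuntimeError("all converter instances failed") from last_err

pool = ConverterPool([
    "https://conv1.example.org/ndl-converter",  # hypothetical replicas
    "https://conv2.example.org/ndl-converter",
    "https://conv3.example.org/ndl-converter",
])
```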
  
ExoGENI also runs several global components:
  * ORCA Actor Registry – a secure service that allows distributed ExoGENI ORCA actors to recognize each other and create security associations so that they can communicate. All active actors are listed in the web view, and an actor requires ExoGENI operations staff approval before it can start communicating with other actors. ORCA actors in each rack are manually configured to recognize each other and use the registry to find actors in other racks and the global actors.
  * [[http://geni.renci.org:15080/registry/images.jsp | ORCA Image Registry]] – a simple image listing service, available as a web page and an XMLRPC service (for automated tools), that lists well-known images created by ExoGENI users.
====== GENI Integration ======
  
Since the internal ORCA inter-actor APIs (the APIs used for communication between ORCA AMs, brokers and SMs) differ significantly from the GENI AM API, it is the ORCA SMs that provide the GENI AM API compatibility layer. ORCA SMs implement the GENI AM API XMLRPC interface for users, while speaking native ORCA APIs on the back end. They also perform the necessary conversions between GENI RSpec and ORCA’s internal semantic NDL-OWL resource representations. This approach allows ExoGENI to evolve its architecture while maintaining compatibility with the GENI standards.
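The SM's role here is essentially that of an adapter. The following is a minimal sketch of the idea only; every ORCA-side name below (''OrcaBackend'', ''list_ads'', ''ndl_to_rspec'') is hypothetical and stands in for much larger real machinery.

```python
# Sketch of the SM compatibility-layer idea: accept a GENI AM API style
# call and answer it by speaking ORCA-native operations on the back end.
# OrcaBackend, list_ads and ndl_to_rspec are hypothetical stand-ins.

class OrcaBackend:
    """Stand-in for the SM's native ORCA client side."""
    def list_ads(self):
        # Would query brokers/AMs for NDL-OWL resource advertisements.
        return ["<ndl-owl-advertisement/>"]

def ndl_to_rspec(ndl):
    # Would call the RSpec/NDL conversion service; stubbed here.
    return "<rspec source='%s'/>" % ndl

class GeniAmFacade:
    """Exposes the user-facing GENI AM API shape over ORCA calls."""
    def __init__(self, backend):
        self.backend = backend

    def ListResources(self, credentials, options):
        rspecs = [ndl_to_rspec(ad) for ad in self.backend.list_ads()]
        # AM API v2 responses are a code/value/output triple.
        return {"code": {"geni_code": 0},
                "value": "\n".join(rspecs),
                "output": ""}
```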
  
For interoperability with the traditional model of using OpenFlow in GENI, each rack runs an instance of FOAM. Manual approval of FOAM slices is performed by the GPO or their delegate.
  
====== Navigation ======
{{indexmenu>.#2|js#doku tsort nsort rsort nocookie}}