Nagios collects information on the most common performance metrics (CPU, memory, disk utilization, network traffic, temperature readings). The IBM x3650 M3/M4 server family has extensive probes for server health monitoring (including power consumption) which we will work to enable and expose via Nagios and to GENI users.
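As an illustration of how a probe feeds one of these metrics into Nagios, here is a minimal sketch of a Nagios-style check in Python. It is not one of the rack's actual probes, and the disk-utilization thresholds are assumed values.

<code python>
#!/usr/bin/env python3
# Minimal Nagios-style check: report disk utilization with performance data.
# Thresholds are illustrative assumptions, not rack policy.
import shutil
import sys

WARN, CRIT = 80.0, 90.0  # percent-used thresholds (assumed)

usage = shutil.disk_usage('/')
pct = 100.0 * usage.used / usage.total

# Nagios plugin convention: one status line, '|'-separated performance data,
# and exit codes 0=OK, 1=WARNING, 2=CRITICAL.
status = 'CRITICAL' if pct >= CRIT else 'WARNING' if pct >= WARN else 'OK'
print(f"DISK {status} - {pct:.1f}% used | disk_used={pct:.1f}%;{WARN};{CRIT}")
sys.exit(2 if pct >= CRIT else 1 if pct >= WARN else 0)
</code>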
  
Monitoring information is also sent to UKY using the [[https://github.com/RENCI-NRIG/blowhole | blowhole]] software (the repo is private, but the code is available upon request), which can subscribe to slice event notifications distributed by multiple ORCA controllers over XMPP and perform a [[private:configuration:xmpp | variety of tasks]] based on its configuration.
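The subscription pattern blowhole uses can be pictured with a short sketch: connect to an XMPP server, subscribe to a pub-sub node carrying slice events, and react to each notification. This is not blowhole's own code; the server, node name, and credentials below are hypothetical, and the Python slixmpp library stands in for whatever XMPP client a deployment actually uses.

<code python>
# Sketch: subscribing to ORCA slice-event notifications over XMPP pub-sub.
# Server, pub-sub node, and credentials are placeholders.
import slixmpp

class SliceEventListener(slixmpp.ClientXMPP):
    def __init__(self, jid, password):
        super().__init__(jid, password)
        self.register_plugin('xep_0060')  # XMPP publish-subscribe
        self.add_event_handler('session_start', self.start)
        self.add_event_handler('pubsub_publish', self.on_event)

    async def start(self, event):
        self.send_presence()
        await self.get_roster()
        # Node name is hypothetical; real ORCA event nodes are deployment-specific.
        await self['xep_0060'].subscribe('pubsub.xmpp.example.org',
                                         'orca/sliceEvents')

    def on_event(self, msg):
        # A configured task would run here; we just print the event payload.
        print(msg['pubsub_event']['items']['item']['payload'])

xmpp = SliceEventListener('monitor@xmpp.example.org', 'secret')
xmpp.connect()
xmpp.process(forever=True)
</code>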
  
====== GENI Resources ======

Each rack uses [[public:software:aux_infrastructure:start | several additional components]] external to ORCA but integrated into its operations:
  * [[https://code.renci.org/gf/project/networkedclouds/wiki/?pagename=ImageProxy | ImageProxy]] – a component that helps distribute user-created filesystem/kernel/ramdisk images to different sites. Today’s cloud software (Eucalyptus, OpenStack, xCAT) is built on a single-site model in which each site has a separate image repository from which compute instances are booted. When multiple sites are involved, a user must somehow specify which image is to be used, and the image must be registered with the selected sites. ImageProxy fulfills this function by allowing the user to specify a URL of the image descriptor meta-file and its hash (for security purposes). When ORCA processes a slice request and decides on a slice binding to particular sites, the ImageProxies at those sites download and register the user image based on the URL of the metafile so the user’s image can be booted on compute slivers within the slice (see the hashing sketch after this list).
  * [[https://geni-orca.renci.org/trac/wiki/shorewall-with-orca | Shorewall DNAT Proxy]] – an optional component that helps sites with limited public IP address availability to proxy services on TCP/UDP ports running on compute slivers using a single IP address. Its operation is fully integrated with ORCA and is configurable by the site operator.
  * [[https://github.com/RENCI-NRIG/nodeagent2 | NodeAgent2]] – a component that makes it possible to place remote calls onto substrates. Designed with OSCARS and NSI support in mind, it works for other types of substrates as well.
  * FlowVisor – the software provided by Stanford for creating OpenFlow slices. ORCA communicates with FlowVisor directly via its XMLRPC interface.
  * RSpec/NDL conversion service – used by all ORCA SMs to convert RSpec requests to NDL and NDL manifests into RSpec. To avoid susceptibility to central failures, several instances of the converter have been deployed at multiple sites, and all SMs are configured to use them in a round-robin fashion for both load balancing and redundancy (see the round-robin sketch below).
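The hashing sketch referenced in the ImageProxy item above: the user supplies a metafile URL together with a digest of its contents, which ImageProxy can verify after downloading. The URL below is a placeholder, and SHA-1 is an assumption about the digest algorithm a deployment uses.

<code python>
# Sketch: computing the hash a user supplies alongside an ImageProxy
# metafile URL. URL is a placeholder; SHA-1 is an assumed digest algorithm.
import hashlib
import urllib.request

URL = 'http://images.example.org/my-image.metafile.xml'

with urllib.request.urlopen(URL) as resp:
    digest = hashlib.sha1(resp.read()).hexdigest()

# The URL/digest pair goes into the slice request; the site's ImageProxy
# re-downloads the metafile and checks the digest before registering the image.
print(URL, digest)
</code>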
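The round-robin sketch referenced in the conversion-service item: rotate through the redundant converter endpoints and fail over when one is unreachable. The endpoint URLs and RPC method name are placeholders, not the converter's documented interface.

<code python>
# Sketch: round-robin over redundant converter instances with failover.
# Endpoint URLs and the RPC method name are hypothetical.
import itertools
import xmlrpc.client

CONVERTERS = [
    'https://converter1.example.org/ndl-conversion',
    'https://converter2.example.org/ndl-conversion',
]
_rotation = itertools.cycle(CONVERTERS)

def convert_request(rspec_xml):
    # Try each instance once, starting from the next one in the rotation.
    for _ in range(len(CONVERTERS)):
        url = next(_rotation)
        try:
            proxy = xmlrpc.client.ServerProxy(url)
            return proxy.convertRSpecToNDL(rspec_xml)  # placeholder method
        except (xmlrpc.client.Error, OSError):
            continue  # instance down or erroring: try the next one
    raise RuntimeError('no converter instance reachable')
</code>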
ExoGENI also runs several global components:
  * ORCA Actor Registry – a secure service that allows distributed ExoGENI ORCA actors to recognize each other and create security associations in order for them to communicate. All active actors are listed in the web view, and an actor requires ExoGENI operations staff approval in order to start communicating with other actors. ORCA actors in each rack are manually configured to recognize each other and use the registry to find actors in other racks and the global actors.
  * [[http://geni.renci.org:15080/registry/images.jsp | ORCA Image Registry]] – a simple image listing service available as a web page and XMLRPC service (for automated tools) that lists well-known images created by ExoGENI users.
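For the XMLRPC side, an automated tool would query the registry roughly as below; the endpoint path and method name are assumptions for illustration, not the registry's documented API.

<code python>
# Sketch: listing well-known images from the registry's XMLRPC service.
# Endpoint path and method name are assumed, for illustration only.
import xmlrpc.client

registry = xmlrpc.client.ServerProxy('http://geni.renci.org:15080/registry/xmlrpc')
for image in registry.getAllImages():  # hypothetical method name
    print(image)
</code>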
====== GENI Integration ======
  
Since the internal ORCA inter-actor APIs (the APIs used to communicate between ORCA AMs, brokers, and SMs) differ significantly from the GENI AM API, it is the ORCA SMs that provide the GENI AM API compatibility layer. ORCA SMs implement the GENI AM API XMLRPC interface for users while speaking ORCA native APIs on the back end. They also perform the necessary conversions between GENI RSpec and ORCA’s internal semantic NDL-OWL resource representations. This approach allows ExoGENI to evolve its architecture while maintaining compatibility with the GENI standards.
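From a user tool's perspective, the compatibility layer looks like any GENI AM API endpoint. A minimal sketch of calling it over XMLRPC with a GENI certificate follows; the SM URL is a placeholder, while GetVersion is the standard first call defined by the GENI AM API.

<code python>
# Sketch: calling the GENI AM API face of an ORCA SM the way a user tool would.
# The SM URL is a placeholder; cert/key are the user's GENI credentials.
import ssl
import xmlrpc.client

ctx = ssl.create_default_context()
ctx.load_cert_chain(certfile='geni_cert.pem', keyfile='geni_key.pem')
ctx.check_hostname = False       # testbed AMs often present self-signed
ctx.verify_mode = ssl.CERT_NONE  # server certificates

am = xmlrpc.client.ServerProxy('https://sm.example.org:11443/orca/xmlrpc',
                               context=ctx)
print(am.GetVersion())  # standard GENI AM API entry point
</code>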
  
For interoperability with the traditional model of using OpenFlow in GENI, each rack runs an instance of FOAM. Manual approval of FOAM slices is performed by the GPO or their delegate.