OPEN-O Multiple VIM Driver Project Use Cases


OPEN-O Multiple VIM Driver Project Use Cases Version 0.2 Draft – For Review

Goals
- Figure out the interaction logic among the VNFMs, Res. Mgr., and Multi-VIM driver, as well as other related entities within the OPEN-O architecture.
- Define the northbound interfaces of the Multi-VIM driver.

Arch. of Multi-VIM driver
- Each specific VIM driver runs as a standalone microservice.
- The VIM broker runs as a standalone microservice. It exposes RESTful APIs to the Res. Mgr. and VNFMs and redirects VIM requests to the specific VIM driver microservices.
- VIM instances are registered by VIM administrators with Ext. System Registration via the OPEN-O portal.

Arch. of Multi-VIM driver (architecture diagram)
- Microservices on the Micro-Service Framework, behind the API Gateway: VIM Broker (with VIM instance <-> driver caching), VIM Driver Kilo, VIM Driver Newton, VIM Driver VIO.
- Backend VIMs: Kilo OpenStack, Newton OpenStack, VMware VIO.
- Related entities: G-VNFM, Ext. Sys. Registration.
- The legend distinguishes existing functionality from the project scope.

Use case 1: VIM instance resource reporting
Actors: GUI Portal, Ext. Sys., NFVO Res. Mgr., VIM Broker, VIMx Driver, VIMx instance n
1. VIMx instance n registration
2. Query VIM instance n resources
3. Query VIM n resource: e.g. get /openoapi/vimbroker/v1/{n}/hosts/
4. Get metadata of VIM n if the metadata cache misses
5. Return metadata of VIM n: type: VIMx, VIM n URL, etc.
6. Query VIM n resource: get /openoapi/vimdriver/{VIMx}/v1/hosts
7. Query VIM n resource
8. Return resource info
9. Cache VIM n metadata, token, etc.
10. Return resource info
11. Return resource info
13. Respond with success or failure
Notes:
1. "VIM instance n" stands for the VIM instance ID assigned by OPEN-O on registration with Ext. System Registration.
2. VIM instance resources may include: hosts, CPU, memory limits, disk limits, volume limits, provider networks, tenant networks, projects (tenants)?
3. "List" operations on all resources should support "marker" and "limit" so that the returned objects can be paged, and should support "filters" to filter the objects of interest.
4. VIM instance registration: different VIM instances need to register with different sets of information, although part of it can be abstracted. ESR should allow this variation by either providing a multi-line entry box where the user enters multiple key-value pairs that are simply stored without validation, or letting the VIM driver expose some kind of web page where users enter the data interactively and constraints are checked on registration. That part of the VIM instance information will be interpreted only by the corresponding VIM driver, but the NFVO Res. Mgr. or gVNFM should store it and pass it back down to the VIM driver later when operating on that VIM instance.
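A minimal sketch of how a client of the VIM broker might page through the hosts of a registered VIM instance using the marker/limit convention described in note 3. Only the /openoapi/vimbroker/v1/{n}/hosts path comes from the slide; the gateway address, response shape, and field names are assumptions for illustration, not the confirmed OPEN-O API.

```python
# Sketch: paged "list hosts" query through the VIM broker (use case 1).
# Endpoint path is from the slide; base URL, response shape, and field
# names are illustrative assumptions.
import requests

BASE = "http://openo-gateway:8080"          # assumed API gateway address
VIM_INSTANCE_ID = "vim-instance-n"          # ID assigned by ESR on registration


def list_hosts(vim_id, page_size=50):
    """Yield host records page by page using marker/limit paging."""
    marker = None
    while True:
        params = {"limit": page_size}
        if marker:
            params["marker"] = marker
        resp = requests.get(
            f"{BASE}/openoapi/vimbroker/v1/{vim_id}/hosts", params=params)
        resp.raise_for_status()
        hosts = resp.json().get("hosts", [])    # assumed response shape
        if not hosts:
            return
        yield from hosts
        marker = hosts[-1].get("id")            # next page starts after the last item


for host in list_hosts(VIM_INSTANCE_ID):
    print(host.get("name"), host.get("cpu"), host.get("memory_mb"))
```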

Use case 2: overall process to deploy a VNF
Actors: NFVO LCM, NFVO Res. Mgr., gVNFM, VIM Broker
1. Create image on VIM instance n
2. Query image status on VIM instance n
3. Notify image information on VIM n
4. Create network/subnet for VLs of the NSD on VIM instance n
5. Create subnet for VLs of the NSD on VIM instance n
6. Create VRouter for VLs of the NSD on VIM instance n
7. Notify network/subnet/VRouter information on VIM n
8. Deploy VNFs of the NSD on VIM instance n
9. Request resource granting on VIM instance n
10. Create volume on VIM instance n
11. Query volume status on VIM instance n
12. Create network/subnet for VLs of the VNFD on VIM instance n
13. Create port on VIM instance n
14. Create server on VIM instance n
15. Query server status on VIM instance n
Questions:
1. What are the VLs of an NSD? Is a VL a Layer 3 link or a Layer 2 link? Is there a designated IP/gateway for that VL in the NSD?
2. Should the flavor be exposed as a standalone API to the gVNFM? The initial proposal is that the flavor is invisible to the gVNFM, so its parameters are passed down while creating the server. However, is it more reasonable to expose flavor management APIs to the gVNFM/Res. Mgr.?

Use case 3: overall process to terminate a VNF
Actors: NFVO LCM, NFVO Res. Mgr., gVNFM, VIM Broker
1. Terminate VNFs of the NSD on VIM instance n
2. Stop server on VIM instance n
3. Query server status on VIM instance n
4. Delete port on VIM instance n
5. Delete network/subnet for VLs of the VNFD on VIM instance n
6. Delete volume on VIM instance n
7. Query volume status on VIM instance n
8. Release the granted resources on VIM instance n
9. Delete VRouter for VLs of the NSD on VIM instance n
10. Delete subnet for VLs of the NSD on VIM instance n
11. Delete network for VLs of the NSD on VIM instance n
12. Notify network/subnet/VRouter information on VIM n
13. Delete image on VIM instance n
14. Query image status on VIM instance n
15. Notify image information on VIM n

Use case 4.1: create image (async mode)
Actors: Catalog, NFVO LCM, Res. Mgr., VIM Broker, VIMx Driver, VIMx instance n
1. Retrieve URL of image
2. Create image on VIM instance n: post /openoapi/vimbroker/v1/{n}/image
3. Create image on VIM n: post /openoapi/vimdriver/{VIMx}/v1/image
4. Create image on VIM n
5. Return image UUID immediately
6. Cache image information for VIM n
7. Return resource info
8. Respond with success or failure
12. Download image file
13. Upload image file and create image on VIM n
Loop:
9. Query image uploading progress to VIM n
10. Query image uploading progress on VIM n
11. Query image uploading progress on VIM n
14. Notify image information to VIM n once the image uploading is done
Questions:
1. Where is the image stored within OPEN-O? Onboarding a VNF package triggers the NFVO LCM to create the image; the LCM retrieves the image file (getting the URL from the catalog) and uploads it to the VIM instance via the VIM driver.
2. Can the image be retrieved across microservices? Yes.
3. Create image is implemented in async mode: the VIM driver returns the image UUID as soon as possible, while it spawns/delegates another thread to download the image file from the catalog and then upload it to the VIM instance. The LCM queries the image upload progress repeatedly.
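A minimal sketch of the async pattern described in question 3: the POST returns the image UUID immediately and the caller polls for progress. The /openoapi/vimbroker/v1/{n}/image path is from the slide; the base URL, request body, polling URL, and status values are illustrative assumptions.

```python
# Sketch: async image creation via the VIM broker (use case 4.1).
# Body/response fields, status values, and polling URL are assumptions.
import time
import requests

BASE = "http://openo-gateway:8080"      # assumed API gateway address


def create_image_async(vim_id, name, image_url):
    # POST returns the image UUID immediately (step 5); the driver downloads
    # and uploads the image file in the background (steps 12-13).
    resp = requests.post(
        f"{BASE}/openoapi/vimbroker/v1/{vim_id}/image",
        json={"name": name, "imagePath": image_url})    # assumed body fields
    resp.raise_for_status()
    return resp.json()["id"]                            # assumed field name


def wait_for_image(vim_id, image_id, interval=5):
    # The LCM queries the upload progress repeatedly (steps 9-11).
    while True:
        resp = requests.get(
            f"{BASE}/openoapi/vimbroker/v1/{vim_id}/image/{image_id}")
        resp.raise_for_status()
        status = resp.json().get("status")              # assumed field name
        if status in ("active", "error"):
            return status
        time.sleep(interval)
```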

Use case 5.1: NFVO LCM creates network/subnet/VRouter
Actors: NFVO Res. Mgr., VIM Broker, VIMx Driver, VIMx instance n
1. Create VRouter on VIM instance n: post /openoapi/vimbroker/v1/{n}/vrouter
2. Create VRouter on VIM n: post /openoapi/vimdriver/{VIMx}/v1/vrouter
3. Create VRouter on VIM n
4. Return VRouter UUID
5. Return resource info
6. Respond with success or failure
7. Create network on VIM n
8. Create network on VIM n: post /openoapi/vimdriver/{VIMx}/v1/network
9. Create network on VIM n
10. Return network UUID
11. Return resource info
12. Return network UUID
13. Create subnet on VIM n
14. Create subnet on VIM n: post /openoapi/vimdriver/{VIMx}/v1/subnet
15. Create subnet on VIM n
16. Attach subnet to network and router on VIM n
17. Return subnet UUID
18. Return resource info
19. Respond with success or failure
20. Notify network information on VIM n
Questions:
1. How is the provider network provisioned? Suggestion: the VIM driver reports a list of provider networks to the Res. Mgr.; the provider network information could be registered with the External System upon VIM instance registration. Resource report item for this process: report provider networks to the NFVO Res. Mgr.
2. The difference between networks created by the NFVO LCM and by the gVNFM: the NFVO LCM creates networks to underlay the VLs between VNFs, so they might span different VIM instances; the gVNFM creates networks to connect the VDUs of the same VNF, all of which are on the same VIM instance. The question is: will the VL between VNFs be a Layer 3 link, with a designated IP, or a Layer 2 link?
3. Should the public network UUID be passed to the gVNFM? If the gVNFM needs to create and associate a floating IP, this is mandatory.
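The three creations in this use case could be chained as sketched below. The /vrouter, /network, and /subnet suffixes follow the driver-level paths on the slide; the broker-level paths, base URL, payload fields, and the way the subnet is attached to the network and router (step 16) are assumptions for illustration.

```python
# Sketch: use case 5.1 as driven by the NFVO LCM through the VIM broker.
# Base URL and payload field names are illustrative assumptions.
import requests

BASE = "http://openo-gateway:8080"      # assumed API gateway address


def create_nsd_vl(vim_id, vl_name, cidr):
    # Steps 1-4: create the VRouter and keep its UUID.
    router = requests.post(
        f"{BASE}/openoapi/vimbroker/v1/{vim_id}/vrouter",
        json={"name": f"{vl_name}-router"}).json()
    # Steps 7-12: create the network that carries the NSD-level VL.
    network = requests.post(
        f"{BASE}/openoapi/vimbroker/v1/{vim_id}/network",
        json={"name": vl_name}).json()
    # Steps 13-17: create the subnet; the driver attaches it to the
    # network and router on the VIM (step 16).
    subnet = requests.post(
        f"{BASE}/openoapi/vimbroker/v1/{vim_id}/subnet",
        json={"networkId": network["id"],       # assumed field names
              "routerId": router["id"],
              "cidr": cidr}).json()
    return router["id"], network["id"], subnet["id"]
```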

Use case 5.2: gVNFM creates network/subnet
Actors: gVNFM, Res. Mgr., VIM Broker, VIMx Driver, VIMx instance n
1. Create network on VIM instance n: post /openoapi/vimbroker/v1/network
2. Create network on VIM n: post /openoapi/vimdriver/{VIMx}/v1/network
3. Create network on VIM n
4. Return network UUID
5. Return resource info
6. Respond with success or failure
7. Create subnet on VIM instance n: post /openoapi/vimbroker/v1/subnet
8. Create subnet on VIM n: post /openoapi/vimdriver/{VIMx}/v1/subnet
9. Create subnet on VIM n
10. Return subnet UUID
11. Return resource info
12. Respond with success or failure
13. Notify network/subnet information on VIM n
Questions:
1. How is the provider network provisioned? Suggestion: the VIM driver reports a list of provider networks to the Res. Mgr.; the provider network information could be registered with the External System upon VIM instance registration. Resource report item for this process: report provider networks to the NFVO Res. Mgr.
2. The difference between networks created by the NFVO LCM and by the gVNFM: the NFVO LCM creates networks to underlay the VLs between VNFs, so they might span different VIM instances; the gVNFM creates networks to connect the VDUs of the same VNF, all of which are on the same VIM instance. The question is: will the VL between VNFs be a Layer 3 link, with a designated IP, or a Layer 2 link?

Use case 6.1: gVNFM creates port to attach to an NSD VL
Actors: gVNFM, Res. Mgr., VIM Broker, VIMx Driver, VIMx instance n
1. Create port on VIM instance n: post /openoapi/vimbroker/v1/{n}/port
2. Create port on VIM n: post /openoapi/vimdriver/{VIMx}/v1/port
3. Create port on VIM n
4. Return port UUID
5. Return resource info
6. Respond with success or failure
7. Notify port information on VIM n
Note: "Create port" is under discussion; further conversation is needed here.
Questions:
1. What is the reason to create a port explicitly? To associate a floating IP with this port.
2. Does creating a port explicitly bind the server to a specific compute host? No. The port is bound to a host only when the server is scheduled to that host.
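A short sketch of the explicit port creation, whose stated purpose is to obtain a port that a floating IP can later be associated with. The broker /port path follows the slide; the base URL and body field names are assumptions.

```python
# Sketch: use case 6.1, create a port on an existing network so that a
# floating IP can later be associated with it. Field names are assumed.
import requests

BASE = "http://openo-gateway:8080"      # assumed API gateway address


def create_port(vim_id, network_id, subnet_id, name):
    resp = requests.post(
        f"{BASE}/openoapi/vimbroker/v1/{vim_id}/port",
        json={"name": name,
              "networkId": network_id,   # assumed field names
              "subnetId": subnet_id})
    resp.raise_for_status()
    # The port is not bound to a compute host yet; binding happens only
    # when the server using it is scheduled (see question 2).
    return resp.json()["id"]
```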

Use case 7.1: gVNFM creates volume (async mode)
Actors: gVNFM, Res. Mgr., VIM Broker, VIMx Driver, VIMx instance n
1. Request resource grant for creating a volume on VIM instance n
2. Create volume on VIM instance n: post /openoapi/vimbroker/v1/{n}/volume
3. Create volume on VIM n: post /openoapi/vimdriver/{VIMx}/v1/volume
4. Create volume on VIM n
5. Return volume UUID immediately
6. Cache volume information for VIM n
7. Return resource info
8. Respond with success or failure
Loop:
9. Query volume creation progress on VIM n
10. Query volume creation progress on VIM n
11. Query volume progress on VIM n
14. Notify volume information on VIM n once the creation is done
Question:
1. Will the NFVO Res. Mgr. manage volume resources?
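Volume creation follows the same async shape as image creation; the sketch below adds a bounded polling loop. The /volume path is from the slide; the base URL, body/response fields, status values, and timeout policy are assumptions.

```python
# Sketch: use case 7.1, async volume creation with a bounded polling loop.
# Body/response fields and the "available"/"error" states are assumptions.
import time
import requests

BASE = "http://openo-gateway:8080"      # assumed API gateway address


def create_volume(vim_id, name, size_gb, timeout=600, interval=5):
    resp = requests.post(
        f"{BASE}/openoapi/vimbroker/v1/{vim_id}/volume",
        json={"name": name, "size": size_gb})    # assumed body fields
    resp.raise_for_status()
    volume_id = resp.json()["id"]                # UUID returned immediately (step 5)

    deadline = time.time() + timeout
    while time.time() < deadline:                # steps 9-11: poll progress
        status = requests.get(
            f"{BASE}/openoapi/vimbroker/v1/{vim_id}/volume/{volume_id}"
        ).json().get("status")
        if status in ("available", "error"):
            return volume_id, status
        time.sleep(interval)
    raise TimeoutError(f"volume {volume_id} not ready after {timeout}s")
```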

Use case 8.1: create server
Actors: NFVO Res. Mgr., gVNFM, VIM Broker, VIMx Driver, VIMx instance n
1. Request resource grant for creating a server on VIM instance n
2. Resource is granted and deducted for VIM instance n
3. Create server on VIM instance n: post /openoapi/vimbroker/v1/{n}/server
4. Create server on VIM n: post /openoapi/vimdriver/{VIMx}/v1/server
6. Create flavor on VIM n
7. Return flavor UUID
8. If required, create volume on VIM n
9. Return volume UUID
10. Create server on VIM n
11. Return server UUID immediately
12. Cache server information for VIM n
13. Return resource info
14. Respond with success or failure
Loop:
15. Query server creation status repeatedly
16. Query server status
17. Query server status
Notes:
1. Pass the image UUID, port UUID, and network UUID to the VIM broker to create the server.
2. The server is created in async mode.
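Per note 1, the create-server request carries the image, port, and network UUIDs, and, since the flavor is not exposed as a standalone API in the initial proposal, the flavor parameters travel in the same request. A sketch of such a request follows; the /server path is from the slide, while the base URL and all body field names are assumptions.

```python
# Sketch: use case 8.1, create a server through the VIM broker, passing
# image/port/network UUIDs and flavor parameters in one request (async mode).
import requests

BASE = "http://openo-gateway:8080"      # assumed API gateway address


def create_server(vim_id, name, image_id, port_id, network_id,
                  vcpus=2, ram_mb=4096, disk_gb=40):
    resp = requests.post(
        f"{BASE}/openoapi/vimbroker/v1/{vim_id}/server",
        json={
            "name": name,
            "imageId": image_id,
            # Flavor is not a standalone API in the initial proposal, so its
            # parameters are passed down while creating the server.
            "flavor": {"vcpus": vcpus, "ram": ram_mb, "disk": disk_gb},
            "nics": [{"portId": port_id, "networkId": network_id}],
        })
    resp.raise_for_status()
    return resp.json()["id"]     # UUID returned immediately; status is polled later
```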

VLs for NSD and VNFD: VNF on the same VIM instance (diagram)
Diagram elements: VNF1's vdu1 and vdu2; a VL at the VNFD level and a VL at the NSD level; a network to carry the VNFD's VL, networks to carry the NSD's VL, and a shared public network; two VRouters. The legend distinguishes networks created by the NFVO LCM, by the gVNFM, and by the VIM admin.
Questions:
1. Are VNF1 and VNF2 on different subnets? Is a router needed to forward traffic between them? Check whether the VL is an L3 connection.

VLs for NSD and VNFD: VNF on multiple VIM instances (diagram)
Diagram elements: VNF1's vdu1 and vdu2; a VL at the NSD level; a network to carry the VNFD's VL and networks to carry the NSD's VL; a shared public network on each VIM instance, connected by a tunnel over the WAN; two VRouters. The legend distinguishes networks created by the NFVO LCM, by the gVNFM, and by the VIM admin.
Questions:
1. Are VNF1 and VNF2 on different subnets? Is a router needed to forward traffic between them?
2. Is the subnet for the VNFs private or public (in the context of the OPEN-O infrastructure, e.g. over the tunnel)?

THANK YOU
Live migration of VNFs could be considered for the next freeze.