
Integrated Cloud Native NFV & App stack – Proposals Intel Contact: Srinivasa.r.addepalli@intel.com Contributors: Verizon, VMWare, MobileEdgex, Dell, ONAP.




1 Integrated Cloud Native NFV & App stack – Proposals Intel Contact: Srinivasa.r.addepalli@intel.com Contributors: Verizon, VMWare, MobileEdgeX, Dell, ONAP community, inputs from a retailer

2 Integrated Cloud Native NFV & App related blueprints
Blueprint family: Integrated Cloud Native NFV & App family
Blueprints: Multi-Server Cloud native NFV/App stack
Value Added Services and use cases: SDWAN; Distributed Analytics as a service; IOT framework - EdgeXFoundry; Video CDN & Streaming
Contributors: Verizon, Intel, MobileEdgeX, Aarna Networks, VMWare, Dell

3 Background

4 What is Edge?
Diagram (summarized): apps run across the edges – CPE (white box), POP (BRAS, PE), Edge Cloud (tens of thousands of locations), Core Cloud (EPC) and global Clouds – spanning Device Edge, Enterprise Edge and Network Edge (the ICN focus), with Central Orchestration, Analytics and Closed loop control above. 5G functions (RU, DU, CU-CP, CU-UP/UPF, SMF, AMF, UDM, AUSF, PCF, AF) are distributed across these tiers over IaaS/CaaS.
Acronyms:
RU – Remote Radio Unit
DU – Distributed Unit (5G Base Unit)
CU-UP – Centralized User Plane
CU-CP – Centralized Control Plane (low and high)
UPF – User Plane Function
SMF – Session Management Function
UDM – Unified Data Management Function
AUSF – Authentication Service Function
Drivers:
5G – high throughput, low latency and low jitter needs
New revenue streams from new services
Making sense of the enormous data generated at the device layer

5 Why Edge?
Massive data from/to devices – data reduction to clouds (Upstream: data from drones, autonomous vehicles, factories. Downstream: 4K streaming, AR/VR, sports casting, live gaming)
Real time and ultra real time performance (closed loop control in the case of IOT, on-demand compute for AR/VR, live gaming, etc.)
Contextual services (user based services, location based services)
Data sovereignty (GDPR and other regulations)
Cost savings (large bills from CSPs)
Answer: do compute at the Edge (Edge Computing)

6 Edge: Contrasting with the Clouds
Differences from clouds, and what to learn from clouds (the slide's two columns):
Ease of use and ease of consumption of services
Constrained resources (cost, power, space)
Massive geo-distributed computing (tens of thousands of edge locations for applications)
Support for all kinds of workloads (VMs, containers, functions)
Not as good physical security (insecure locations, no guards, regulatory concerns)
Multi-tenancy (use compute for multiple tenants)
Orchestration at scale (infra orchestration and app orchestration – app deployment across multiple edge providers, across edges & clouds, app visualization, analytics, connectivity)
Time to market (support for common services across applications, but with public APIs to meet cloud native requirements)

7 Target Users of ICN – Enable companies to become Edge providers
Targets:
Enterprises that have multiple offices (retail, hospitals, banks, large corporations)
Telcos planning to become edge providers
SaaS providers
Independent providers
Managed service providers
Enable ISVs and SIs who would like to add value and provide commercial support
Tunable/customizable for enterprises, factories, network edges and big edges

8 Transformation journey (to Kubernetes)
Today: two different resource orchestrators per site – OpenStack for VNFs on dedicated compute nodes, and Kubernetes for CNFs and micro-services – with the compute nodes divided between them.
Target: Kubernetes as the single orchestrator for VNFs, CNFs, micro-services and functions (VMs run under K8S), with:
Soft multitenancy within one K8S cluster
Strict multitenancy with K8S clusters created from VMs (when required)
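A minimal sketch of the soft-multitenancy model above: each tenant maps to a Kubernetes namespace with a ResourceQuota bounding its share of the shared cluster. The tenant names and quota values here are illustrative, not part of the proposal.

```python
import json

def tenant_manifests(tenant, cpu_limit, mem_limit):
    """Build Namespace + ResourceQuota manifests for one soft tenant.

    Soft multitenancy: tenants share one K8S cluster, isolated by
    namespace and bounded by resource quotas (values are examples).
    """
    namespace = {
        "apiVersion": "v1",
        "kind": "Namespace",
        "metadata": {"name": tenant},
    }
    quota = {
        "apiVersion": "v1",
        "kind": "ResourceQuota",
        "metadata": {"name": f"{tenant}-quota", "namespace": tenant},
        "spec": {"hard": {"limits.cpu": cpu_limit,
                          "limits.memory": mem_limit}},
    }
    return [namespace, quota]

# Example: operator workloads vs customer workloads as two soft tenants.
manifests = tenant_manifests("operator", "16", "64Gi") + \
            tenant_manifests("customer-a", "8", "32Gi")
print(json.dumps([m["kind"] for m in manifests]))
```

Strict multitenancy, by contrast, would give each tenant its own K8S cluster (built from VMs), so no quota sharing arises in the first place.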

9 Asks
Operator feedback (Enterprise Edge Managed Services):
Do more with fewer resources – enable all accelerations, smartNIC, one resource orchestrator
Soft multitenancy – operator workloads vs customer workloads
Sharing of resources across NFs and Apps
SFC across Enterprise Edge and Network Edge
Multi-cluster orchestration
Retail (who prefer to manage their own clusters):
One cluster for both network functions and apps
Dedicate a node for network functions
Smaller processors, as a K8S cluster requires three machines for redundancy; cost of the cluster to be less than $2,000
Centralized software provisioning
Software/Solution provider:
Centralized software provisioning
High performance stack
K8S for all workloads
Programmable NICs
Programmable switches and switch controllers

10 Technical Goals of ‘Integrated Cloud Native NFV & App’ BPs
Co-existence of multiple deployment types (VNFs, CNFs, VMs, containers and functions)
Advanced networking support (multiple networks, provider networks, dynamic route/network creation, service function chaining)
Soft and strict multi-tenancy
AI based predictive placement (collection using Prometheus, training and inferencing framework)
Slicing in each tenant (QoS on a per-slice basis, VLAN networks for slices, VNFs/CNFs/VMs/PODs on a per-slice basis, or a slice configuration facility on shared VNFs/CNFs)
Multi-site scheduler (ONAP4K8S) (auto edge registration, workload placement, on-demand tenant/slice creation)
Service mesh for micro-services (acceleration using Cilium's kernel bypass among service mesh sidecars – e.g. Envoys – and others)
Programmable CNI (to allow SFC and avoid multiple protocol layers)
Security orchestration (key orchestration for securing private keys of CA and user certificates)
Prove with either test cases or use cases
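The "multiple networks" goal above is commonly realized with Multus: a NetworkAttachmentDefinition describes each secondary network, and a pod opts in via an annotation. A sketch under those assumptions follows; the network name, parent interface, VLAN and subnet are made up, and macvlan stands in for the slide's OVN provider networks.

```python
import json

def provider_network_attachment(name, vlan_parent, vlan_id, subnet):
    """Sketch of a Multus NetworkAttachmentDefinition for an extra
    (provider) network alongside the default CNI.

    The embedded CNI config is a JSON string, per the Multus CRD.
    """
    cni_config = {
        "cniVersion": "0.3.1",
        "type": "macvlan",                     # stand-in for an OVN provider net
        "master": f"{vlan_parent}.{vlan_id}",  # VLAN sub-interface on the node
        "ipam": {"type": "host-local", "subnet": subnet},
    }
    return {
        "apiVersion": "k8s.cni.cncf.io/v1",
        "kind": "NetworkAttachmentDefinition",
        "metadata": {"name": name},
        "spec": {"config": json.dumps(cni_config)},
    }

nad = provider_network_attachment("provider-net-1", "eth1", 100, "172.16.0.0/24")
# A pod requests the secondary network with the standard Multus annotation:
pod_annotation = {"k8s.v1.cni.cncf.io/networks": nad["metadata"]["name"]}
print(pod_annotation)
```

A pod carrying that annotation gets an interface on the provider network in addition to its default (e.g. Flannel) interface, which is the basis for the SFC and provider-network goals.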

11 End2End Solution – Multi-Edge and Multi-Cloud BP
Global layer: Multi Cluster Orchestration (workloads, traffic and security), Global Infrastructure Controller, App Repository, Site Info and OS images.
Edge layer: multiple Edge Platforms (K8S based, each with local infra) running on small hardware, medium/big hardware, or VMs in public clouds (AWS, Azure, GCP).

12 Managed SDWAN use case
The operator's OSS/BSS drives the Service Orchestrator (ONAP); subscribers consume the service.
Subscriber Edge1 and Edge2 (Kubernetes based) each run an operator slice (SDWAN VNF, Security VNF, WAN Opt CNF) and a subscriber slice; a (dynamic) tunnel connects the edges over multiple ISPs (ISP A/X at Edge1, ISP B/Y at Edge2) across the CoSP/Internet.
The Network Edge hosts an operator slice with advanced network services (e.g. DLP), AR/VR services, media analytics, MEC services and training.
May be tested with dummy apps and OpenWRT, if no real apps come from the community.

13 How does an NFV based deployment with cloud-native applications look? (Taking SDWAN with security NFs as an example)
What it proves:
Coexistence of VNFs, CNFs and micro-services
Advanced networking support
Soft multitenancy (resident 1 and resident 2 applications as micro-services)
Multi-cluster scheduler via ONAP
Service mesh for micro-services
Programmable CNI
Deployment sketch: a K8S cluster (masters M1..Mx on desktops/laptops/servers, multiple hardware nodes) runs the resident applications as PODs behind Ingress (L7 LB) and BGP LB, all sharing the default virtual network (Flannel). The SDWAN VNF, Firewall VNFs (with BGP router) and IPS/WAF CNFs attach to provider networks (OVN; provider network 1 using L2 breakout with OVN LB on an L2 switch) and to virtual networks 1/2 (OVN with LB), reaching corp networks and the Internet via an external router.
CNIs: Flannel, Calico, Weave, OVN, Cilium, etc.
Initial testing to be done using Ubuntu for all VNFs/CNFs.
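The NF chain in this slide (traffic steered through a firewall VNF, an IPS/WAF CNF, then the SDWAN VNF before leaving the site) can be sketched as an ordered composition of per-NF handlers. The NF behaviors below are placeholders for illustration, not the real VNFs/CNFs.

```python
def make_chain(*functions):
    """Compose network functions into a service function chain:
    a packet traverses each NF in order; any NF may drop it (None)."""
    def chain(packet):
        for nf in functions:
            packet = nf(packet)
            if packet is None:        # dropped by an NF
                return None
        return packet
    return chain

# Placeholder NFs standing in for the firewall VNF, IPS/WAF CNF and SDWAN VNF.
def firewall(pkt):
    return None if pkt["port"] == 23 else pkt            # block telnet
def ips_waf(pkt):
    return None if "attack" in pkt.get("payload", "") else pkt
def sdwan(pkt):
    return {**pkt, "path": "ISP-A"}                      # pick a WAN path

egress_chain = make_chain(firewall, ips_waf, sdwan)
print(egress_chain({"port": 443, "payload": "hello"}))   # forwarded via ISP-A
print(egress_chain({"port": 23, "payload": "hello"}))    # → None (dropped)
```

In the real stack this steering happens in the data plane (the "programmable CNI" and OVN networks on the slide), not in application code; the sketch only shows the ordering semantics an SFC enforces.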

14 Distributed Analytics as a Service
(Each site has self-contained inferencing; a few sites also run training.)
Flow (operator → Service Orchestrator (ONAP) → Kubernetes edges):
1. Onboard the analytics framework (with 6 bundles: Training App, Training, Model Repo, Collection & Distribution, Inferencing, Kafka broker) in the catalog
2. Activate the framework (training components on a Big Edge)
3. Onboard an analytics app
4. Activate the analytics app (inferencing apps on Edge1 and Edge2)
Federated Learning (future)
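A minimal sketch of the pattern above, assuming a central model repo fed by the training sites and self-contained inferencing at each edge. The repo API, model name and the toy linear model are all illustrative.

```python
class ModelRepo:
    """Central repo: training sites publish versioned models here."""
    def __init__(self):
        self._models = {}            # name -> (version, params)

    def publish(self, name, version, params):
        self._models[name] = (version, params)

    def latest(self, name):
        return self._models[name]

class EdgeInferencer:
    """Self-contained inferencing at one edge site: syncs the model
    once at activation, then serves inferences locally (no round trip
    to the central site per request)."""
    def __init__(self, repo, model_name):
        self.version, self.params = repo.latest(model_name)

    def infer(self, x):
        w, b = self.params           # toy linear model for illustration
        return w * x + b

repo = ModelRepo()
repo.publish("traffic-forecast", 3, (2.0, 1.0))   # trained at a "big edge"
edge1 = EdgeInferencer(repo, "traffic-forecast")
print(edge1.version, edge1.infer(5.0))            # → 3 11.0
```

The slide's Kafka broker and Collection & Distribution bundles would carry data and models between sites; here the repo object stands in for that path.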

15 EdgeXFoundry use case
The operator drives the Service Orchestrator (ONAP), which deploys EdgeXFoundry across Kubernetes edges: each edge runs the CORE SERVICES, a Simple Rules Engine and Security, and one edge additionally hosts Databases. Additional services on top: Machine Learning, Security, UI, Logging, Graphs.

16 VR 360 streaming – Enable remote users to view events/games via edge computing
Pipeline: Stitch and Tiling at the on-prem (stadium) edge; Distributor and Streamers at network edges (near viewers, within 20 msec of users and within 10 msec of ONAP); Decode and Playout (AVX512) at home/remote viewers.
Flow:
1. Onboard the app (with 6 services) in the catalog with deployment intent
2. Activate the app (when the event starts) – stadium slice and VR provider1 slice come up
3. New users join the event (auto bring-up of streamer services at Edge1 and Edge3)
4. Users join in a geo that requires additional context to be added to the stream (contextual video edit)
5. More users join near Edge1 (more streamers)
6. Users disappear (streamers scale down)
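The latency-driven scale-out in steps 3–5 can be sketched as a simple placement rule: bring up a streamer at the nearest edge that satisfies the latency bound. The edge names, latencies and the 20 msec bound (taken from the slide) are the only inputs; everything else is an illustrative assumption.

```python
def place_streamer(user_latency_ms, bound_ms=20):
    """Pick the closest edge within the latency bound for a new viewer.

    user_latency_ms: mapping of edge name -> measured RTT to the user (ms).
    Returns the chosen edge, or None if no edge meets the bound
    (i.e. the viewer would be served from a farther tier).
    """
    candidates = {e: lat for e, lat in user_latency_ms.items() if lat <= bound_ms}
    if not candidates:
        return None
    return min(candidates, key=candidates.get)

# New viewers near Edge1: both Edge1 and Edge3 qualify, Edge1 is closest,
# so new streamer instances come up there (step 5 on the slide).
print(place_streamer({"Edge1": 8, "Edge2": 35, "Edge3": 18}))   # → Edge1
print(place_streamer({"Edge1": 50, "Edge2": 60}))               # → None
```

In the blueprint this decision would sit in the multi-site scheduler (ONAP4K8S), driven by deployment intent rather than a hand-written rule.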

17 What is ICN?
An opinionated stack that satisfies a good number of use cases:
Cloud native stack for both network functions and applications
Supports only Intel HW platforms, with possible non-Intel peripherals
Optimized stack (leverages HW accelerators, NICs)
Generic platform and application common services (new to ICN – consider in future releases)
Multi-cloud & multi-edge & multi-party orchestration
Fix the gaps to make a consumable/deployable solution

18 Edge Stack (ICN) – Big picture
Apps, VNFs, CNFs: developer applications; tools for developers to optimize and convert for edges (e.g. DPDK, OneAPI)
Multi Cluster Orchestration: MC Orchestrator (ONAP4K8S), MC Security Controller, MC Service Controller
Site level Orchestration: multi-tenant Kubernetes, QoS based admission control, augment K8S for all deployment types
Edge value added services: geo-distributed DBaaS, geo-distributed Vault, distributed security & network functions, distributed data & analytics platform, monitoring platform (fluentd, Elasticsearch, Kibana)
Platform Services: accelerator plugins (QAT, SRIOV), telemetry collection (collectd, Prometheus), 5G CBRS stack, NFD, OVN4K8S, optimized service mesh (ISTIO/Envoy)
Virtualization & container runtime: virtualization of local accelerators, remote accelerators, Docker, Virtlet, Kata, Multus, OVN, Flannel
Operating system: Ubuntu, CentOS, Clear
Hardware platform: SmartNIC, inline crypto acceleration, autonomous media acceleration, elastic power management, RDT
Infrastructure Orchestration: Metal3, Ironic, BPA
Legend (from the slide): new feature project; existing open source with major enhancement, work with upstream; enhancement in the ICN project

19 Cloud Native App & NFV Stack - BICN
(Bold & italic on the slide – Intel led initiatives)
Value added services / sample apps: SDWAN + security NFs, streaming, EdgeXFoundry; VPP, DPDK, AF_XDP based NFs/apps
Central/regional location (ONAP-for-K8S): analytics framework, Data Lake, Tenant Mgr, Edge Label Mgr, Slice Mgr, multi-site scheduler, K8S HPA, Training, Model Repo, Messaging, MC – K8S Plugin Service (instantiation, Day0, Day2 config), Centralized Infra Controller (leverage Cluster API + a workflow manager such as Argo/Tekton)
Edge location (Kubernetes, KUD): K8S app components and NFV specific components – ISTIO, MetalLB, gVisor, Kata, Flannel/OVN, Multus, SRIOV NIC, OVN4K8SNFV, CollectD, Prometheus, Affinity, Hugepage mgr, SFC Mgr, Virtlet, Kubevirt, OpenNESS, NFD, OVN4SFC, knative, NUMA Mgr, Ceph/Rook, QAT, Network/Route Mgr, Inferencing, OVS-DPDK, Logging
Host operating system: Ubuntu (start with this), RH, Clear; tuned for eBPF and XDP at vEth
Infrastructure provisioning & configuration: Ironic with Metal3 (or equivalent) for bare-metal provisioning
Upstream communities: ONAP, OPNFV, many CNCF projects, EdgeXFoundry, FD.IO, DPDK, Linux, OpenStack Ironic, OVS, many ASF projects, OpenNESS, Intel open source PaaS (MEX)

20 BACKUP

21 Edge Computing – Cloud gaming app example
Components (from the diagram): a Service Orchestrator in the Cloud/DC; edges from providers P1..Px (P1-Edge1, P1-Edge2, P2-Edge3, … Pm-EdgeN; 1000s to millions of edges, millions in the case of client edges) hosting the Game Selection Service, Game Simulation Engine, Render, Encode and Streamer services with GPU as a Service; thin clients (render and playout at the edge) and PC clients (GPU offload; decode, blend & playout locally).
Flow: select game via the Game Selection Service; slice bring-up and app bring-up with the help of the service orchestrator (e.g. ONAP); game interaction; distributed rendering and playout; when a new player joins, expand the slice & add new micro-services.

22 Thank you

23 Blueprint Proposal: Multi-server Cloud native stack
Case attributes and descriptions:
Type: Blueprint
Blueprint Family - Proposed Name: Integrated Cloud Native NFV stack
Use Case: SDWAN, Customer Edge, Edge Clouds – deploy VNFs/CNFs and applications as micro-services
Blueprint proposed Name: Multi-server Cloud Native stack
Initial POD Cost (capex): 50K minimum
Scale & Type: Minimum of 4 Xeon servers + 1 Xeon server as bootstrap node
Applications: SDWAN, ML/DL analytics, EdgeXFoundry and 360 degree video streaming
Power Restrictions: –
Orchestration: Bare metal provisioning: Ironic with metal3.io. Kubernetes provisioning: KUD. Centralized provisioning: Cluster-API + provisioning controller (explore Regional Controller). Docker for containers and Virtlet for VMs. Service orchestration: ONAP4K8S. MEC framework: OpenNESS. Site orchestrator: Kubernetes upstream. Traffic orchestration within a cluster: ISTIO. Traffic orchestration with external entities: ISTIO-ingress. Knative for function orchestration. Expose HW accelerators. Storage orchestration: Ceph with Rook. Future: AF_XDP for packet-processing containers, OpenShift as site orchestrator, Kubevirt for VMs
SDN: OVN, SRIOV, Flannel
Workload Type: Containers, VMs and functions
Additional Details: Future: eBPF based CNI (such as PolyCube)

24 Assessment Criteria – Multi-Server Edge NFV/App stack
Name of the project is appropriate (no trademark issues etc.); proposed repository name is all lower-case without any special characters: Integrated Cloud Native NFV & App stack
Project contact name, company, and email are defined and documented: Identified; will be documented in the page
Description of the project goal and its purpose are defined: Developing an integrated cloud native stack solution for VNFs, CNFs
Scope and project plan are well defined: For release 2. Yes. Various milestones within the release are identified
Resources committed and available: There is a team, resources and a lab in place
Contributors identified: Verizon, Intel, MobileEdgeX, Aarna Networks, VMWare
Initial list of committers identified (elected/proposed by initial contributors): Yes. PTL and 4 engineers are identified
Meets Akraino TSC policies: The project will operate in a transparent, open, collaborative, and ethical manner at all times
Proposal has been socialized with potentially interested or affected projects and/or parties: Have already reached a consensus with sponsors
Cross project dependencies: CNCF projects, K8S, CRI, OCI, Virtlet, Kubevirt, Kata containers, gVisor, OpenWRT, Docker, OVN, OVS, DPDK, AF_XDP, ONAP
Each initial blueprint is encouraged to take on at least two committers from different companies: Verizon, Intel, MobileEdgeX, Aarna Networks, VMWare
Complete all templates outlined in this document: Detailed in this slide
A lab with the exact configuration required by the blueprint to connect with Akraino CI and demonstrate CD; the user should demonstrate either an existing lab or the funding and commitment to build the needed configuration: Continuous Deployment will be provided in the Intel community lab (similar to the way we did for OPNFV)
Blueprint is aligned with the Akraino Edge Stack Charter: All open source, edge use case, aligned with the Akraino Charter
Blueprint code that will be developed and used with the Akraino repository should use only open source software components, either from upstream or Akraino projects: Yes, all open source. Actual applications (use cases) in some cases are from third parties
For new blueprint submissions, the submitter should review existing blueprints, ensure it is not a duplicate, and explain how the submission differs; the functional fit of an existing blueprint for a use case does not prevent an additional blueprint being submitted: A comprehensive platform with all kinds of deployment types – VNFs, CNFs, micro-services and functions – plus advanced networking support, multi-tenancy, slicing, and OVN based data interfaces


