StarlingX Project Overview


1 StarlingX Project Overview
OSF Community Webinar. Bruce E. Jones, Intel; Ian Jolliffe, Wind River

2 Let Me Introduce StarlingX
New, top-level OpenStack Foundation pilot project. Software stack providing high performance, low latency, and high availability for Edge Cloud applications. The first release is coming out this week! Growing community, inviting users, operators, and developers to try out the software and participate in the community. We are very pleased to have the opportunity to talk with you today about the StarlingX project. StarlingX is an OpenStack Foundation pilot project. The first community release was in October 2018. The code is being released under the Apache 2 license. We are inviting you to try out the software, join the community, and contribute to the project.

3 Project Overview

4 What Is Driving Edge Computing?
(Slide graphic: approximate latencies at different points in the network: ~100ms, ~10-40ms, <1-2ms, <5ms.) For over a decade, centralized cloud computing has been a standard software delivery platform. While cloud computing remains ubiquitous, emerging requirements and workloads are exposing some limitations. The standard Cloud data center model works extremely well for millions of applications, but there are other applications which can greatly benefit by moving computing resources out of the datacenter and closer to the Edge of the network. There are many definitions about where exactly the Edge of the network is, and they seem to depend on what use cases and devices are being discussed. For the purposes of today's discussion, consider the Edge to be somewhere between the Internet's Core routers and the device you have in your hand. For more information you can contact the OpenStack Edge Computing working group. Edge Computing is being driven by the opportunities to optimize applications deployed at the Edge to take advantage of lower network latencies and "closer to the user" geographic locations. There are many use cases that can benefit from these optimizations. The key drivers: latency, bandwidth, security, connectivity; "where" matters. Source: Cloud Edge Computing: Beyond the Data Center

5 Edge Computing Use Cases
vRAN == virtual Radio Access Network; MEC == Multi-access Edge Computing. Latency is one of the primary motivators of the move toward Edge Computing, and for Telcos deploying a virtualized Radio Access Network, it has already proven very worthwhile to move some compute to Edge systems by running virtualized baseband functions on commodity server hardware. Health care use cases like imaging, diagnostics, and basic monitoring functions can benefit from local compute resources, which also help meet government regulations for data locality and privacy. And there are new use cases emerging for Multi-access Edge Computing, to help reduce application latency for virtual reality and augmented reality, where high latency can ruin the experience.

6 Edge Computing Use Cases
uCPE == universal Customer Premises Equipment. Here are some other use cases that can take advantage of the built-in security, high availability, fault recovery, and low latency that an Edge-optimized software stack like StarlingX offers. Mass transit, both for its increasing sensor-driven information gathering and as it moves to autonomous vehicles in smart cities: with self-driving vehicles generating terabytes of data and the need to survive network outages, local compute and storage resources are critical. Industrial automation's growing use of robotics and sensors in all areas: placing Edge nodes on the factory floor can improve manufacturing speed and accuracy. Equipment on customer premises that processes and also delivers local content to end users. All of these use cases can benefit from software solutions that enable widely distributed Edge Cloud network topologies, to put processing resources and storage closer to the end user and thus reduce latency and increase performance.

7 What Problems Is StarlingX Solving?
Massive data growth; the network needs to be smarter; distributed infrastructure demands a different architecture; the maturity and robustness of Cloud is required everywhere; managing a massively distributed compute environment is hard. Moving off the centralized core cloud, new applications, services, and workloads increasingly demand a different kind of architecture, one that's built to directly support a more broadly distributed infrastructure. New requirements for availability and cloud capability at remote sites are needed to support both today's requirements (retail data analytics, network services) and tomorrow's innovations (smart cities, AR/VR). The maturity, robustness, flexibility, and simplicity of cloud computing can now be extended across many small sites and networks at the Edge of the network in order to cope with evolving demands. Source: Cloud Edge Computing: Beyond the Data Center

8 Intent of the StarlingX Project
Re-configure proven Cloud technologies for Edge compute: orchestrate system-wide, deploy and manage Edge clouds, share configurations, and simplify deployment to geographically dispersed, remote Edge regions. Technology companies have now spent a couple of decades evolving Cloud computing to its current state of robustness. Linux blazed a trail of open source success. The OpenStack software stack showcases how some of the hardest Cloud software problems can be solved in an open community. And newer technologies have spun out of the creative energies that vibrant open source communities can foster. This is apparent where new Cloud deployments use hypervisors and virtual machines with containers under an umbrella of orchestration. StarlingX takes these successful open source Cloud components and makes them a viable platform in the somewhat different arena of the distributed Edge. In re-configuring these strengths, it must also take into account geographic dispersion, low-overhead communications, and the need to manage large hardware deployments. It needs to be Cloud-like in many aspects, but it has to be simple to deploy, distribute, and manage. This is the goal of the StarlingX project. Example verticals: video, healthcare, manufacturing, transportation, smart cities, retail, energy, drones. *Other names and brands may be claimed as the property of others

9 StarlingX Technology

10 StarlingX - Edge Virtualization Platform
StarlingX provides a deployment-ready, scalable, highly reliable Edge infrastructure software platform. Services from the StarlingX virtualization platform focus on easy deployment, low-touch manageability, rapid response to events, and fast recovery. Think control at the Edge, control between IoT and Cloud, control over your virtual machines. StarlingX is a complete Cloud infrastructure software stack for the Edge, used by the most demanding applications in industrial IoT, telecom, and other use cases. It is based on mature production software deployed in mission-critical applications. We have just open sourced the StarlingX code for Edge implementations in scalable solutions that can be productized now. It uses both OpenStack components and a number of proven open source building blocks to create its complete offering. Tested and released as a complete stack, StarlingX provides a deployment-ready, scalable, and highly reliable software platform to build mission-critical Edge clouds. Its unique project services include configuration, fault, service, software, and host management to ensure high availability of user applications. The StarlingX virtualization platform is designed to build up your Edge cloud quickly, manage it easily, and respond and recover extremely fast. *Other names and brands may be claimed as the property of others

11 Scalability from Small to Large
Single Server: runs all functions. Dual Server: redundant design. Multiple Server: fully resilient and geographically distributable. The architecture is designed to scale based on the needs of the particular Edge computing environment. The same platform can run on: a Single Server, running Controller, Compute, Networking, and Storage functions, supported on a small hardware form factor; Dual Servers, each server running Controller, Compute, Networking, and Storage functions, with the Controller, Networking, and Storage functions running HA across the two servers; or Multiple Servers, with Controller, Compute, and Storage functions all running on independent servers: a 2x node HA Controller node cluster, a 2-100x Compute node cluster, an optional 2-9 node CEPH Storage node cluster, and L3 networking services distributed across the compute nodes.

12 Configuration Management
Manages installation: auto-discovery of new nodes, management of installation parameters (e.g. console, root disks), and bulk provisioning of nodes through an XML file. Nodal configuration: node role and role profiles; core and memory (including huge page) assignments; network interface and storage assignments. Inventory discovery: CPU/cores, SMT, processors, memory, huge pages; storage and ports; GPUs, storage, and crypto/compression hardware. (Slide diagram: System Inventory agents and conductor backed by an SQL database, applying Puppet manifests to hardware resources, and exposed through a REST API, CLI, Horizon, and a wizard for automated system configuration and setup.)
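As a rough illustration of how a tool might consume the inventory data described above, here is a minimal Python sketch that queries a System Inventory style REST endpoint for the discovered hosts. The port, URL path, and field names are assumptions made for illustration, not the documented StarlingX API.

```python
# Illustrative sketch only: query a system-inventory style REST API for
# discovered hosts. Endpoint, port, and field names are assumptions.
import requests

SYSINV_URL = "http://controller-0:6385/v1"   # assumed API endpoint
TOKEN = "<keystone-token>"                   # auth token obtained separately

def list_hosts():
    """Return the list of hosts known to the system inventory service."""
    resp = requests.get(
        f"{SYSINV_URL}/ihosts",
        headers={"X-Auth-Token": TOKEN},
        timeout=10,
    )
    resp.raise_for_status()
    return resp.json().get("ihosts", [])

if __name__ == "__main__":
    for host in list_hosts():
        # Print each node's role (controller/compute/storage) and state.
        print(host.get("hostname"), host.get("personality"),
              host.get("administrative"), host.get("availability"))
```

The same information is exposed through the CLI and Horizon, so a script like this would simply be an automation convenience on top of the REST API the slide mentions.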

13 Host Management Full life-cycle management of the host
Detects and automatically handles host failures and initiates recovery. Monitoring and alarms for cluster connectivity, critical process failures, resource utilization thresholds, interface states, H/W faults and sensors, and the host watchdog, with activity progress reporting. Interfaces with board management (BMC) for out-of-band reset, power-on/off, and H/W sensor monitoring. The host is managed via a REST API. (Slide diagram: vendor-neutral host management working with infrastructure orchestration, configuration management, and service management to request H/W inventory and to manage and monitor processes, hosts, and VMs.)
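To make the failure-handling flow concrete, here is a small conceptual Python sketch, not StarlingX code, of the loop the slide describes: detect a lost heartbeat, raise an alarm, and trigger an out-of-band reset through the BMC. All names, data, and thresholds are illustrative assumptions.

```python
# Conceptual sketch of host failure detection and automatic recovery.
HEARTBEAT_TIMEOUT_S = 5

# Stubbed host state keyed by hostname: seconds since the last heartbeat.
last_heartbeat = {"compute-0": 1.2, "compute-1": 7.9}

def raise_alarm(host, reason):
    # In a real system this would go to fault management.
    print(f"ALARM {host}: {reason}")

def bmc_reset(host):
    # Out-of-band reset via the host's board management controller;
    # in practice this would be an IPMI or Redfish call.
    print(f"BMC reset issued for {host}")

def monitor_once():
    for host, age in last_heartbeat.items():
        if age > HEARTBEAT_TIMEOUT_S:
            raise_alarm(host, "heartbeat lost")
            bmc_reset(host)   # initiate automatic recovery

monitor_once()
```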

14 Software Upgrades and Patching
(Slide diagram: hitless migration of control, storage, compute, and VM functions from a source cloud on release N to a cloud on release N+1.) Automated deployment of software updates for security and/or new functionality. Integrated end-to-end rolling upgrade solution: automated, with a low number of steps, and no additional hardware required for the upgrade. Rolling upgrade across nodes, with both in-service and reboot-required patches supported (a reboot is required for kernel replacement, etc.); VM live migration is used for patches that require a reboot. Manages upgrades of all software: host OS changes, new or upgraded StarlingX service software, and new or upgraded OpenStack software. This has been done for five releases, including skipping releases (Kilo->Mitaka, Newton->Pike).
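The rolling, reboot-required patch flow can be sketched as a simple loop. The following Python pseudocode is a conceptual illustration only, not the StarlingX patch orchestrator; all function names are placeholders.

```python
# Conceptual sketch of a rolling, reboot-required patch applied one node at
# a time: migrate workloads off a host, patch and reboot it, wait for it to
# recover, then move on to the next host.

def live_migrate_instances(host):
    print(f"live-migrating instances off {host}")

def apply_patch_and_reboot(host):
    print(f"applying patch and rebooting {host}")

def wait_until_available(host):
    print(f"waiting for {host} to rejoin the cluster")

def rolling_patch(hosts):
    for host in hosts:
        live_migrate_instances(host)    # keep workloads running elsewhere
        apply_patch_and_reboot(host)    # host-level update, e.g. new kernel
        wait_until_available(host)      # only then proceed to the next host

rolling_patch(["compute-0", "compute-1", "compute-2"])
```

The key property being illustrated is that only one node is out of service at a time, which is what makes the upgrade hitless for running VMs.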

15 Current Architecture
The OS is based on CentOS, with multiple kernel configurations for performance and security. Extensions/fixes to open source packages: the intent is to upstream them to the applicable projects; they are carried by the project initially. Titanium-developed components are being contributed as seed code to StarlingX (covered in more detail during the presentation). OpenStack is based on Pike with a significant number of value-added extensions for hardening, new features, and performance enablement. (Slide diagram legend: black boxes are default services; brown boxes can be optionally deployed.) The intent is to upstream the extensions/hardening to OpenStack; they are carried by the project initially. Wind River intends to upstream future Titanium Cloud development to StarlingX using an upstream-first model, and will start pushing new content to the seed once repos become available. *Other names and brands may be claimed as the property of others

16 Next Generation Container Architecture
(Slide diagram: the container platform runs StarlingX services (fault management, host management, configuration management, service management, software management, infrastructure orchestration) and the OpenStack infrastructure as pods on top of current open source building blocks: kube-apiserver, kube-scheduler, kube-controller-manager, kubelet, kube-proxy, kubectl, etcd, kube-dns, kube-dashboard, Calico, Helm, Docker, a Docker registry, and the Linux OS.) StarlingX is evolving to running OpenStack containerized on top of a bare-metal Kubernetes cluster, with OpenStack-Helm managing the lifecycle of the OpenStack cluster. The Kubernetes cluster initially supports the Docker runtime, the Calico CNI plugin, CEPH as the persistent storage backend, Helm as the package manager, and a local Docker image registry. The Kubernetes cluster is also available for non-OpenStack end-user applications, giving full support for both VMs and containers. *Other names and brands may be claimed as the property of others
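Because OpenStack itself runs as pods on the bare-metal Kubernetes cluster in this architecture, standard Kubernetes tooling can inspect it. Here is a minimal sketch using the official kubernetes Python client; the "openstack" namespace is an assumption about where the OpenStack-Helm managed pods would live.

```python
# Minimal sketch: list the containerized OpenStack pods on the cluster.
# Assumes kubectl access is already configured; the namespace name is an
# assumption for illustration.
from kubernetes import client, config

config.load_kube_config()        # or load_incluster_config() on a node
v1 = client.CoreV1Api()

for pod in v1.list_namespaced_pod(namespace="openstack").items:
    print(pod.metadata.name, pod.status.phase)
```

Non-OpenStack end-user workloads would be deployed to the same cluster in their own namespaces, alongside the platform pods.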

17 The Road to the Edge
Build it yourself from open source components: the building blocks need refinement, it is time consuming, and there are gaps to fill. Or use StarlingX: new services provide improved manageability for the platform and high availability for your applications to meet Edge Cloud requirements, tested and available as a complete stack, mission-ready for your applications. Pretty much all the pieces needed to build a reliable open source distributed Edge virtualization platform are available now. Putting them together is a big challenge. Maintaining and updating it is also a challenge. We have architected a complete stack, then compatibility-checked and validated it. StarlingX is now available for your testing. We'd love to see how it performs for you and get your feedback!

18 Community and Contributing

19 Principles
The StarlingX project follows the "four opens": Open Collaboration, Open Design, Open Development, Open Source. Technical decisions will be made by technical contributors and a representative Technical Steering Committee. The community is committed to diversity and openness, encouraging new contributors and leaders to rise up. The StarlingX community is proud to be an OpenStack Foundation pilot project and fully committed to the Four Opens of software development. We look to technical contributors to make all technical decisions, under the guidance of a Technical Steering Committee composed of experienced developers. We are committed to building a diverse, open, and strong community.

20 Sub-project Structure
Under the Technical Steering Committee, the StarlingX main sub-projects are: Config, Fault, HA, GUI, Metal, NFV, Update, and Distributed Cloud. The StarlingX supporting sub-projects are: Docs, Build, Distro: OpenStack, Distro: non-OpenStack, Test, Security, Containers, Networking, Releases, MultiOS, Python 2 -> 3, Devstack, and Zuul enablement. Main sub-projects cover new functionality and services; supporting sub-projects cover supporting services, test, and infrastructure. Each sub-project team has 1 Technical Lead, 1 Project Lead, Core Reviewers, and Contributors. The StarlingX project is a very large project. To help make things more manageable we have divided the work up into a number of sub-projects. Each of the new StarlingX services has its own sub-project team. In addition we have created a number of other sub-projects to allow us to further divide the work. Each of these teams has leaders, core reviewers, and contributors who work on the sub-project and join the sub-project technical calls.

21 Governance Roles Contributor Core Reviewer
Contributor: someone who made a contribution in the past 12 months (code, test, or documentation) or is serving in a leadership role; can run and vote for elected positions. Core Reviewer: active contributors to a sub-project, appointed by fellow core reviewers; responsible for reviewing changes and specifications; can merge code and documentation changes. In creating our project governance we have used concepts and ideas from several existing projects, including the OpenStack projects themselves and Kata Containers. We grant "Contributor" status to anyone who has made a contribution to the project in the last 12 months. This status is used to determine eligibility to run for and vote in the annual project leadership elections. We grant Core Reviewer status to key technical contributors who agree to take on the responsibility of actively reviewing code and specification submissions.

22 Governance Roles Technical Lead Project Lead Per sub-project
Technical Lead: a Core Reviewer with additional duties, who helps guide the technical direction of a sub-project. Project Lead: handles sub-project level coordination work, tracks and communicates progress and priorities, and acts as the sub-project ambassador. The StarlingX community has decided to split the traditional OpenStack PTL role and to define these roles separately. Our projects have both a Technical Lead and a Project Lead. We are lucky that we have many contributors in our community who can fill these roles, and believe that splitting the PTL role allows them to be more effective leaders for the project.

23 Governance Bodies Technical Steering Committee (TSC)
Responsible for overall project architectural decisions, managing the sub-project life-cycle, and making final decisions if sub-project Core Reviewers, Technical Leads, or Project Leads disagree. It will comprise 7 people; the initial group will be appointed, and the project will move to an election-based system within the first year. The initial TSC members are Brent Rowsell (Wind River), Ian Jolliffe (Wind River), Dean Troyer (Intel), and Saul Wold (Intel). The StarlingX Technical Steering Committee is the overall leader for the project. The buck stops there. They provide architectural vision and direction, manage the lifecycle of the sub-projects, resolve technical decisions, and are the key technical leaders for the project.

24 Contributions
Code and formal documentation are available through git/gerrit at git.starlingx.io. Informal documentation is also on our wiki. Bugs are tracked in Launchpad. New ideas are introduced in the specs repository. Design and implementation work is tracked in StoryBoard. We are using git and gerrit and the standard OpenStack tools for CI/CD. We have formal documents on the web that are managed in git repos, and informal documents on the wiki. We are using the Launchpad tool to track our bugs and the StoryBoard tool for features. New features and significant development efforts are defined in a specification process and reviewed and approved in gerrit.

25 Community
You do not need to be an Individual Member of the OpenStack Foundation in order to contribute, but if you want to vote in the annual OpenStack Foundation Board of Directors election, you may join: openstack.org/join. If you are contributing on behalf of an employer, they will need to sign a corporate contributor license agreement, which now covers all projects hosted by the OpenStack Foundation (the same model as Apache and CNCF). Anyone can join the StarlingX project and contribute to it. You do not have to be a member of the OpenStack Foundation, but you can certainly join if you'd like to. If your employer is a member, please make sure you have a corporate contributor license agreement in place.

26 Communication
IRC: #starlingx on Freenode. Mailing lists: lists.starlingx.io. Weekly meetings: Zoom calls. You have an opportunity to get involved in several ways: stay updated with the mailing list (we use the starlingx-discuss list for all discussions); attend the community meetings; and try out the code. Please do try out the code. And also, join us as a member company at any level.

27 Thank You! Q&A

28 Appendix

29 What Problems Is StarlingX Solving?
Massive data growth: data is exploding and requires higher capacity and lower latency. The network needs to be smarter: flexible, cloud-ready, and distributed. Moving off the centralized core, new applications, services, and workloads increasingly demand a different kind of architecture, one that's built to directly support a distributed infrastructure. New requirements for availability and cloud capability at remote sites are needed to support both today's requirements (retail data analytics, network services) and tomorrow's innovations (smart cities, AR/VR). The maturity, robustness, flexibility, and simplicity of cloud now need to be extended across multiple sites and networks in order to cope with evolving demands. Distributed infrastructure demands a different architecture; the maturity and robustness of Cloud is required everywhere; managing a massively distributed compute environment is hard. Source: Cloud Edge Computing: Beyond the Data Center

30 Standard Configuration
(Slide diagram: a 1:1 HA control cluster, 2-100 compute nodes running VMs with { OVS-DPDK, SRIOV } on the compute platform, and optional 2-9 storage nodes, connected via a Layer 2 switch to external networks and Layer 3 routers.) StarlingX provides a bare metal deployment of OpenStack. While the platform can scale from a single server, or a redundant pair, to hundreds of servers, let's look at an example of a standard configuration of StarlingX. The standard configuration consists of: a 2x node HA Controller node cluster; a 2-100x Compute node cluster (note: up to 100 only with a CEPH Storage node cluster); and an optional 2-9 node CEPH Storage node cluster. The StarlingX Service Manager provides cluster management for the HA Control node cluster. Neutron's L3 networking services are distributed across the Compute nodes. The Cinder backend can be LVM on the Controllers (DRBD-sync'd for HA) and/or on the CEPH Storage nodes. The Glance backend can be a filesystem on the Controllers (DRBD-sync'd for HA) and/or on the CEPH Storage nodes. The Swift backend is only supported on CEPH Storage nodes. CEPH Storage nodes can be deployed in 'replication groups': groups of 2x Storage nodes for a replication factor of 2, or groups of 3x Storage nodes for a replication factor of 3. *Other names and brands may be claimed as the property of others
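As a back-of-the-envelope illustration of the replication groups mentioned above, usable storage capacity is roughly the raw capacity divided by the replication factor. The numbers in this small sketch are made up for illustration.

```python
# Illustrative only: rough usable capacity for a CEPH storage cluster
# deployed in replication groups of 2 (factor 2) or 3 (factor 3).

def usable_capacity_tb(nodes, tb_per_node, replication_factor):
    raw = nodes * tb_per_node
    return raw / replication_factor

# e.g. 6 storage nodes of 10 TB each, deployed as groups of 3 (factor 3)
print(usable_capacity_tb(nodes=6, tb_per_node=10, replication_factor=3))  # 20.0
```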

