Some of our Challenges
- Seamless on-demand infrastructure capacity: do we really want those hundred tickets to deploy a service?
- Drive developer agility
- Provide self-service tools for application life-cycle management
- Provide a platform to enable faster innovation
OpenStack is the Winner
- Solves Infrastructure-as-a-Service
- It's open source: no specific vendor lock-in
- Fast-growing developer community
- Open standards and API driven
- Industry best practices; prevents reinventing the wheel
Our Technology Stack
- Two entry points for infrastructure: PayPal product developers, and cloud operators who manage the cloud
- Centrally orchestrated using Heat
- User interface: operations portal (Asgard, Horizon, Ceilometer); PD deployment portal (traffic management, monitoring, metering, stages, workflow)
- Orchestration engine: Cloud Formation (Heat)
- Foundational services: Nova, Cinder, Swift, Keystone, Neutron, Horizon; PayPal specific: LBaaS, DNSaaS, FWaaS
- Software infrastructure: Cobbler, ISC DHCP, Salt, BIND, RHEL 6.x, hypervisor, Zabbix
- Hardware infrastructure: x86 compute; local storage (HP 4x600 GB, mirrored); network (Cisco 4948 & Arista 7050, Nicira NVP); F5 load balancer
- Note (meeting, 10/25/13): take Horizon out; replace Asgard with Aurora
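Heat in this era consumed CloudFormation-compatible templates (hence "Cloud Formation (Heat)" above). A minimal sketch of a centrally orchestrated resource; the image and flavor names are illustrative, not the real deployment's values:

```json
{
  "AWSTemplateFormatVersion": "2010-09-09",
  "Description": "Minimal sketch: one compute instance orchestrated by Heat",
  "Resources": {
    "AppServer": {
      "Type": "AWS::EC2::Instance",
      "Properties": {
        "ImageId": "rhel-6.x-base",
        "InstanceType": "m1.small"
      }
    }
  }
}
```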
Tuning Nova for High Availability
- Scheduling enhancements for failure and availability domains
- Custom PayPal filter scheduler
- Tenant-based compute-zone filters in Folsom; host-aggregate filtering in Grizzly
- 25% distribution among different fault zones for HA
- A rack of servers is an important entity: it defines a fault zone (availability zone)
  1. Use host aggregates to define an availability zone for all hosts in a half rack.
  2. Use host aggregates for front and mid tier (production) and on a per-requirement basis, then map tenants to these host aggregates.
- It's tenant based: production requires special tenants to have their VMs land on specific computes
  - In Grizzly: modified host aggregates; added a new table for the tenant-to-host-aggregate mapping
  - In Folsom: had the concept of compute zones; a compute zone could have hosts from different availability zones (fault zones); our own compute-zone filter; reserved compute-zone capability, to make sure a host is dedicated to the owner of the compute zone and no one else lands on it
- 25% availability-zone distribution: the basic concept is equal distribution of VMs for high-availability reasons
  - Custom PayPal filter scheduler
  - Calculate VMs per availability zone for the tenant requesting the instances; this information is used for the 25% availability-zone filtering
  - A weigh filter helps filter by availability-zone fullness
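The 25% availability-zone cap described above can be sketched in the shape of a Nova scheduler filter's `host_passes` hook. This is an illustrative sketch, not PayPal's actual filter: it uses plain dicts instead of Nova's `HostState`/request-spec objects, and the ceiling-based cap is an assumption about how small VM counts were handled.

```python
import math

# Assumption: no availability zone should hold more than ~25% of a
# tenant's VMs (rounded up, so the tenant's first VM can land somewhere).
MAX_ZONE_SHARE = 0.25

def host_passes(host, request):
    """Reject a host if placing one more VM there would push the tenant's
    share in that host's availability zone above MAX_ZONE_SHARE."""
    zone = host["availability_zone"]
    per_zone = request["tenant_vms_per_zone"]   # e.g. {"az1": 3, "az2": 1}
    total = sum(per_zone.values()) + 1          # counting the new VM
    cap = math.ceil(total * MAX_ZONE_SHARE)
    return per_zone.get(zone, 0) + 1 <= cap

def filter_hosts(hosts, request):
    """Keep only the hosts whose zone still has headroom for this tenant."""
    return [h for h in hosts if host_passes(h, request)]
```

A "weigh filter" as mentioned above would then rank the surviving hosts by zone fullness rather than rejecting them outright.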
Nova Changes
- Instance host-name uniqueness (also helps meet some of our ops tooling requirements): template based and configurable per tenant; nova-api-level host-name validation logic for non-standard characters
- Auto-assigning Quantum floating IPs to VMs at launch time, for external connectivity in environments that require it: plugged into Nova at instance-launch time; Nova orchestrates the Quantum API calls to allocate and assign the floating IP to the instance
- Rack-aware networking (in Grizzly) for selecting the correct Neutron network to allocate IPs from for launched instances: bridged vs. overlay networks
- Leveraging config-drive to store cell-specific configuration, device-type labels, etc.
- Nova conductor service: security vs. load on RabbitMQ in large deployments
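Per-tenant, template-based host naming with API-level character validation might look like the sketch below. The template strings and tenant names are hypothetical; only the general mechanism (configurable template plus a validation pass rejecting non-standard characters) comes from the slide.

```python
import re

# Hypothetical per-tenant templates; in the deployment described above
# these were configurable per tenant.
TENANT_TEMPLATES = {
    "payments": "{tenant}-{app}-{index:03d}",
}
DEFAULT_TEMPLATE = "{tenant}-{index:03d}"

# RFC-952-style hostname label: lowercase alphanumerics and hyphens,
# no leading/trailing hyphen, at most 63 characters.
VALID_HOSTNAME = re.compile(r"^[a-z0-9]([a-z0-9-]{0,61}[a-z0-9])?$")

def build_hostname(tenant, app, index):
    """Render the tenant's template, then validate the result the way a
    nova-api-level check would, rejecting non-standard characters."""
    template = TENANT_TEMPLATES.get(tenant, DEFAULT_TEMPLATE)
    name = template.format(tenant=tenant, app=app, index=index)
    if not VALID_HOSTNAME.match(name):
        raise ValueError("non-standard characters in hostname: %r" % name)
    return name
```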
Keystone Changes
- Keystone integration with AD and OpenLDAP for easier authentication of all internal users
- Auto tenancy (for specific clouds): a user can start using the cloud immediately; the tenant name is assigned as the username; default Member or team-admin role ownership is created for the user on this tenant
- Tenant metadata: the extras field is used to save key-value pairs; the concept is similar to host aggregates, where tenants are tagged with key-value pairs
- Tenant-based hostnames and DNS zones: Horizon gained new features that let users select from their tenant's DNS list at instance-launch time
- Client-side token caching: Quantum client calls made by Nova create a lot of Keystone tokens; caching tokens on the client side and reusing them reduces the total number of tokens stored in Keystone and speeds up Keystone performance
- Team admin: a new user role (team admin), so you don't need the OS admin to handle a tenant; you can be team admin of a few tenants (was supposed to be implemented using the Domains concept in the v3 Keystone APIs); can be configured with team_admin_roles = Member, i.e. the roles with which normal users are added to or removed from tenants by team admins; helpful for listing roles of corporate users and tenants
- All these features are configurable
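The client-side token caching above amounts to memoizing a Keystone token until it expires, instead of requesting a fresh one per client call. A minimal sketch (class name and TTL handling are illustrative, not the actual patch):

```python
import time

class TokenCache:
    """Reuse one Keystone token across repeated client calls (e.g. the
    Quantum calls Nova makes) until it expires, instead of requesting a
    new token every time."""

    def __init__(self, fetch_token, ttl=3600):
        self._fetch = fetch_token   # callable that returns a fresh token
        self._ttl = ttl             # seconds the token is considered valid
        self._token = None
        self._expires_at = 0.0

    def get(self):
        now = time.time()
        if self._token is None or now >= self._expires_at:
            self._token = self._fetch()
            self._expires_at = now + self._ttl
        return self._token
```

Fewer fetches means fewer tokens persisted in Keystone's backend, which is where the performance win comes from.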
DNS-as-a-Service Integration
- Automatic: REST-API driven and integrated into Nova; allocation and deallocation of entries are handled at instance creation and deletion time
- Allows each instance to have a unique IP-to-FQDN binding registered in production DNS
- Project-based zones: the tenant-based DNS zoning feature leverages tenant metadata to support different zones per tenant, on a need basis
- DNS support extended to Quantum floating IPs as well
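The create-time hook could be sketched as below. The endpoint path, payload shape, and the `dns_zone` metadata key are assumptions for illustration; the slide only states that registration is REST-driven, happens at instance creation, and that tenant metadata selects the zone.

```python
def dns_zone_for_tenant(tenant_metadata, default_zone="cloud.example.com"):
    """Tenant metadata may override the DNS zone on a per-tenant basis;
    the 'dns_zone' key and default zone are hypothetical."""
    return tenant_metadata.get("dns_zone", default_zone)

def register_instance(http_post, hostname, ip, tenant_metadata):
    """Called at instance-creation time: register a unique IP-to-FQDN
    binding via the DNSaaS REST API. A symmetric deregistration call
    would run at instance-deletion time."""
    zone = dns_zone_for_tenant(tenant_metadata)
    fqdn = "%s.%s" % (hostname, zone)
    http_post("/v1/records", {"fqdn": fqdn, "ip": ip, "type": "A"})
    return fqdn
```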
Load-Balancer-as-a-Service
- REST-API driven, with tenant-based segregation
- Registration and auto-discovery of physical load balancers
- Management of VIPs, pools, monitors, L7 rules, SSL certs, and services through GUI, PaaS, and Heat
- Devices are not exposed to cloud users, but are visible to operators
- Operator-facing APIs for managing devices, config backup/restore, and config sync across primary and secondary LBs
- Change-management integration
- Granular job status, re-submission of failed jobs, 100% async, pre- and post-validation
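The last bullet's job model (a change fanned out to multiple LBs, each sub-job tracked individually so only the failed ones are re-submitted) can be sketched as follows; function names and the status strings are illustrative, not the real API.

```python
def apply_change(devices, push, jobs=None):
    """Fan one config change out to several load balancers, tracking a
    granular per-device job status. Passing a previous jobs dict back in
    re-submits only the devices that failed, skipping the ones already
    done."""
    if jobs is None:
        jobs = {device: "pending" for device in devices}
    for device, status in jobs.items():
        if status == "done":
            continue                 # never repeat a successful push
        try:
            push(device)             # e.g. sync config to primary or secondary
            jobs[device] = "done"
        except Exception:
            jobs[device] = "failed"  # eligible for re-submission
    return jobs
```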
User Experience
- Ease of use, adoption, multi-version, multi-region
- Velocity use case in Asgard itself
- Cell deployment with centralized LDAP login
- Managing different releases of OpenStack with a simple JSON config change
- Options to pick and choose
- NVD3-based graphs and Bootstrap-based GUI
- Easy install
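A "simple JSON config change" for pointing the portal at multiple regions running different OpenStack releases might look like the fragment below; the keys, region names, and URLs are illustrative assumptions, not the tool's actual schema.

```json
{
  "regions": {
    "us-west": {
      "release": "grizzly",
      "keystone_url": "https://keystone.us-west.example.com:5000/v2.0"
    },
    "us-east": {
      "release": "folsom",
      "keystone_url": "https://keystone.us-east.example.com:5000/v2.0"
    }
  },
  "auth": {"backend": "ldap", "centralized_login": true}
}
```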