Going from OpenShift PoC to Production


1 Going from OpenShift PoC to Production
Accelerate your path with HPE
Red Hat Summit | May 2018

2 Presenters
Ka Wai Leung, HPE Solutions Product Management
Michael Mattsson, HPE Nimble Storage Product Management

3 Agenda
Bringing containers to production: different adoption paths
Impact on people, process, and governance
Technology considerations (including data management and protection)
HPE Pointnext Services for OpenShift
Planning for success

4 OpenShift adoption options
The goal of this presentation is to discuss the issues customers typically experience as they move from an OpenShift PoC to production, and then to show how HPE technology can accelerate that journey.

5 Four options for container adoption
1. Deploy containerized commercial apps: up in days, not months; verified and secure
2. Containerize monoliths: migrate to hybrid cloud or bare metal; better CAPEX/OPEX versus VMs
3. Containerize a monolith, then transform to microservices: look for shared services to transform agility, DevOps, and distributed architecture
4. Enable new microservices and apps: greenfield cloud native or containers as a service (CaaS)
This slide describes four potential paths customers can take on their container adoption journey; complexity varies with the approach.

6 Moving from PoC to production
Key considerations: people and organization, dev and release process, governance, technology and platform.
Most organizations that are more than five years old have ingrained habits, tools (Chef, Puppet, CFEngine, Red Hat Satellite, several VMware products, etc.), training, and processes (ITIL, anyone?) designed specifically to manage virtual machines. Trying to force-fit VM-centric management tools and processes onto containers can result in a very large and frustrating bag of hurt.
Other practical hurdles include network considerations such as opening inbound and outbound ports and getting security clearance; security audits that can take many weeks; operations teams asking for run books and details of how day-to-day operation will work; complex application stacks (JBoss, PostgreSQL, middleware) with many components; and change management without taking the system down. Decide whether the PoC learnings and the PoC application itself move into production; the goal is to turn the journey into a repeatable, optimized process for future projects.
Governance: a global assessment of the organization and planning of the service rollout, service operating model, service level agreements, support, licensing, chargeback, training and enablement of the different teams, and internal marketing of the service.
Application dev and release process: for each application, an assessment of its technology stack and architecture, application operating model and deployment pattern, development team training and best practices, containerizing and operationalizing the stack components, and automated building, testing, and acceptance of the application.
Platform: requirements and planning, global architecture, platform operating model, integration with existing infrastructure and IT systems, operationalizing for high availability and disaster recovery, and testing and acceptance of the platform.
Flow on the slide: complete the PoC (often a minimal viable product, MVP), then move to production.

7 People, organization, governance

8 People and organization
Traditional waterfall development model: 1-3 releases/year, 4-12 month cycles.
Agile and DevOps model: 4-12+ releases/year, 1-3 month cycles.
Stakeholders: customer or business unit, Dev & QA, IT Ops.
The two teams collaborate through the OpenShift lifecycle, which provides a separation of concerns as a contract while greatly improving collaboration across the application lifecycle. Developers own the container contents, its operating environment, and the container interdependencies; the operations team takes the built images along with the manifest and runs them in the orchestration system.
This is also a good time to define the ops model (who provides support, who does training) and to consider the CAPEX versus OPEX model during the transition, both of which have people and organizational impacts. The end state is integrated teams.

9 Dev/release process and governance
Waterfall model (command and control) versus continuous delivery model (integrated and empowered):
Request for change and a change control board versus a change record as part of the CI/CD pipeline
Dev controls the app stack versus Dev controls the app image while Ops controls a standardized base image via catalogs
Ops owns security and monitoring versus Dev assumes more control over security and application performance monitoring
Governance is about who owns what and who is responsible for what, and about the updates and enhancements to make in your dev-to-ops process. Different organizations touch the registry contents, so figure out which apps to take to production and define the ops model early: What is the SLA? Who owns what? Who is doing support? Who is making changes on the platform? What is the promotion policy from dev to production? For example, with no ops model for images, what is the update policy when a vulnerability is found in a base image?
*Expect to create, architect, and maintain at least as much test code and automation scripts as you create production code.
**One of the most important cultural changes for aligning work across the organization is having all teams continually integrate and test their code in a production-like environment.

10 Technology considerations

11 OpenShift in production
The OpenShift production ecosystem:
High availability
Lifecycle management
Monitoring
Storage (which types to use: CIFS, NFS, OO)
Config management
SSO and how to handle secrets (see the sketch below)
Orchestration
Scaling
Security
Resource management
Data protection and management
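Of the items above, secrets handling has a direct Kubernetes representation. A minimal sketch of an SSO client credential stored as a Secret and injected into a container as an environment variable; the names, image, and value are hypothetical, not from the deck:

# Hypothetical example: an opaque secret holding an SSO client credential.
apiVersion: v1
kind: Secret
metadata:
  name: sso-client-credentials
type: Opaque
stringData:                        # stringData avoids manual base64 encoding
  clientSecret: change-me
---
# A container consumes the secret as an environment variable.
apiVersion: v1
kind: Pod
metadata:
  name: sso-demo
spec:
  containers:
  - name: app
    image: registry.example.com/myapp:latest   # hypothetical image
    env:
    - name: SSO_CLIENT_SECRET
      valueFrom:
        secretKeyRef:
          name: sso-client-credentials
          key: clientSecret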

12 Technology considerations
Security
Securing the stack: container images, container registry, Red Hat® Linux®, hardware firmware and BIOS
Safe images: security for the private registry (scanning, access control)
Run-time protection and continuous monitoring
OpenSCAP scanning (integrated into CI/CD)
Harden the OS (SELinux is mandatory for OpenShift)
Detailed audit trail for compliance, regulation, and forensics
Leverage security context constraints (SCCs); see the example below
Safeguard sensitive data
Strong remediation and alerting
Kubernetes security, like monitoring or CI/CD, is becoming a must now that the platform is quickly gaining a reputation as the de facto standard for modern containerized deployments. Traditional security processes need to be updated for Kubernetes: legacy security tools that were not built for containers cannot look inside, analyze, or protect containers, microservices, and cloud-native apps. Containers are like black boxes, useful for moving applications from development into production; they provide a great level of portability and isolation, but it is harder to understand what is happening inside them, and to monitor and secure them. Microservices help develop applications faster, and orchestration tools like Kubernetes deploy and dynamically scale the apps, but with so many pieces moving around, a more dynamic security approach is required. Many organizations are therefore adopting security best practices and implementing DevSecOps processes, where everyone is responsible for security and security is implemented from the early development stages through production across the entire software supply chain (also known as continuous security).
OpenSCAP verifies the presence of patches using content produced by the Red Hat Security Response Team (SRT), checks system security configuration settings, and examines systems for signs of compromise using rules based on standards and specifications. To use OpenSCAP effectively you need a tool that verifies a system conforms to a standard; RHN Satellite Server has integrated OpenSCAP as an auditing feature since version 5.5, allowing you to schedule and view compliance scans through the web interface.
OpenShift provides security context constraints (SCCs) that control the actions a pod can perform and what it can access. When a container runs, the capabilities it requires should be satisfied while OpenShift remains a secure container application platform, so no container should be given unnecessary capabilities or allowed to run in an insecure way (e.g., privileged or as root).
A best practice for application security is to integrate automated security testing directly into the build or CI/CD tooling. Add scanners such as Aqua Security, Black Duck, JFrog, or Twistlock for real-time checking against known vulnerabilities; these tools catalog the open source packages in your containers, notify you of known vulnerabilities, and update when new vulnerabilities are discovered in previously scanned packages. The CI/CD process should include policies that automatically open issues when security vulnerabilities are discovered during the build so the DevOps team can take immediate action. Finally, there is an effort underway by Red Hat, Google, and others to standardize auditing and policy enforcement in Kubernetes.
Also cover HPE value-add in this area: WASL, Gen10, HPE storage security, and the Sysdig alliance. A common gap is the lack of security education and training for those involved in software development.
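As a concrete illustration of the SCC point above, here is a minimal sketch of a restrictive SecurityContextConstraints object that forbids privileged containers and forces pods to run as a non-root UID. The object name and exact field choices are illustrative rather than taken from the deck, and the API group version differs between OpenShift 3.x releases.

kind: SecurityContextConstraints
apiVersion: security.openshift.io/v1   # served as plain "v1" on older 3.x releases
metadata:
  name: restricted-example             # hypothetical name
allowPrivilegedContainer: false        # no --privileged containers
allowHostNetwork: false                # no access to the node's network namespace
allowHostPID: false
allowHostPorts: false
runAsUser:
  type: MustRunAsRange                 # UID must fall inside the project's assigned range
seLinuxContext:
  type: MustRunAs                      # SELinux labels assigned per project
fsGroup:
  type: MustRunAs
supplementalGroups:
  type: RunAsAny
requiredDropCapabilities:
- KILL
- MKNOD
- SETUID
- SETGID
readOnlyRootFilesystem: false
users: []                              # grant with: oc adm policy add-scc-to-user ...
groups: []

A pod admitted under such an SCC is rejected if its security context requests anything the constraint does not allow.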

13 Technology considerations
Monitoring
Top five layers to monitor: application, services, Kubernetes deployment, Kubernetes internals, host nodes
Host, container, and application monitoring
Software as a service (SaaS) versus an on-premises monitoring approach
Root cause analysis and remediation
Open source versus pay-for products
Data store for trending and archival analysis
Monitoring tools: CloudForms, Sysdig, Datadog, CoScale, Prometheus/Grafana
Canned metrics and dashboards
Application metrics: performance stats, request counts, request time, request errors
Services: are they working correctly (middleware, database backend)?
Kubernetes deployment: is Kubernetes doing its job? Are enough pods/containers running for each app? Are pods available (liveness and readiness probes, shown below)? Are deployments stuck on a rolling update? Track Scheduled, Running, Available, Desired, and MaxUnavailable.
Kubernetes internals: are the internal services (kubelet, API server, dockerd, etcd, etc.) healthy? Are nodes ready or unschedulable? Are resources available? Watch for events such as CrashLoopBackOff, image pull errors, and OOM evictions.
Host nodes: how are the physical resources (hardware, network, storage) doing? Are the nodes up, and are CPU, memory, disk, and network available?
A commercial monitoring product is recommended for more complex environments: typically where DevOps or platform ops is responsible for software in production, there is a mix of custom metric types, troubleshooting data is wanted alongside monitoring data, and there are requirements for long-term data retention, management, and user access controls. (Source: Sysdig)
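The liveness and readiness probes referenced above are declared per container. A minimal sketch, where the image, port, and health-check paths are hypothetical:

apiVersion: v1
kind: Pod
metadata:
  name: probe-demo
spec:
  containers:
  - name: web
    image: registry.example.com/myapp:latest   # hypothetical image
    ports:
    - containerPort: 8080
    readinessProbe:                 # keeps the pod out of service endpoints until it can serve
      httpGet:
        path: /healthz/ready
        port: 8080
      initialDelaySeconds: 5
      periodSeconds: 10
    livenessProbe:                  # restarts the container if the app stops responding
      httpGet:
        path: /healthz/live
        port: 8080
      initialDelaySeconds: 15
      periodSeconds: 20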

14 Technology considerations
Resource management
Developers are not good at sizing estimates; they tend to overcommit resources and overprovision for "safety"
This leads to inefficient CPU and memory usage, magnified exponentially with thousands of pods (see the sketch after this list)
Analysis tools: cAdvisor, Prometheus/Grafana, Densify, Turbonomic
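One practical countermeasure is to make every container declare CPU and memory requests and limits, which the tools above can then compare against observed usage; a project-level LimitRange or ResourceQuota can enforce defaults when developers omit them. A minimal sketch with illustrative names and values:

apiVersion: v1
kind: Pod
metadata:
  name: sized-app
spec:
  containers:
  - name: app
    image: registry.example.com/myapp:latest   # hypothetical image
    resources:
      requests:            # what the scheduler reserves on a node
        cpu: 250m
        memory: 256Mi
      limits:              # hard ceiling enforced at runtime
        cpu: "1"
        memory: 512Mi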

15 PoC to production configuration considerations
PoC, Dev/QA, or SMB deployment
Deployment scenario: all services, masters, and workers on VMs (with persistent storage supported), HA supported
Total physical nodes: 3
Number of instances (all on VMs): 3 masters, 3 etcd, 3 infrastructure, 2 high availability (HA) proxy, 3 workers
Key SW: OpenShift, Red Hat Hyperconverged Infrastructure (RHHI)

Mid-range production configuration
Deployment scenario: VM or bare-metal workers with persistent storage
Total physical nodes: 6+
Number of instances: 3 masters/etcd, infrastructure, and HA proxy on VMs over 3 physical nodes; 3+ physical nodes for N workers on VMs or bare metal
Key SW: OpenShift, RHV + external storage array; or OpenShift, RHHI (for SW-defined storage)

Enterprise production starter configuration (bare metal)
Deployment scenario: all services, masters, and workers on bare metal
Total physical nodes: 8+
Number of instances: 3 nodes for masters and etcd; 2 nodes for infrastructure, HA load balancer, HA registry, and management tools such as Ansible Tower; 3+ nodes for Kubernetes workers on bare metal
Key SW: monitoring, logging, and billing apps; persistent storage plugin

16 Accelerate OpenShift adoption with HPE
Reference architectures
HPE OpenShift solutions (services component, ecosystem, deployment guide, and automation)
Consistent platform from dev to ops: from development to production to deployment scale
Accelerate developer productivity
Simplify the IT experience
Operations optimized

17 HPE Composable Systems: the ideal container platform
Solution for enterprise-scale container deployment
Deploy containers at cloud-like speed: improve application time to value
Flex container resources up and down: efficient resource allocation by business demand
Centralize container lifecycle management: reduce updates from hours to minutes
Advanced container data management: data protection and storage efficiency for containers
HPE Synergy and 3PAR/Nimble

18 Data management and protection

19 Use cases for persistent storage with Red Hat OpenShift
DevOps CI/CD pipelines: Jenkins, Microsoft® VSTS, CircleCI; release more, faster, and better
Lift and shift: LAMP apps and ERP systems moved from VMs or bare metal (build, ship, run)
IT operations and CaaS: Atlassian tools, ELK stack, LAMP apps; simplified security that is easy to manage; self-service for developers; secure and predictable
In each case the application requests storage through a claim, as sketched below.
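In all of these use cases the application asks OpenShift for storage through a PersistentVolumeClaim rather than addressing the array directly. A minimal sketch, where the claim name, size, and storage class are hypothetical:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: app-data
spec:
  accessModes:
  - ReadWriteOnce              # single-node read/write, typical for databases
  resources:
    requests:
      storage: 20Gi
  storageClassName: standard   # hypothetical class backed by the configured provisioner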

20 Hardware versus software
Capability comparison, external storage versus software-defined storage (SDS):
Consistency model: synchronous | eventually consistent/tunable
Data services: snapshot, clone, async/sync replication | varies
Performance: sized to workload | limited, server bound
Storage reduction: dedupe, compress, thin | requires multiple copies (replicas)
Scale and grow: as needed | need storage, add compute
Efficiency: data processed externally | compromised app latency
CAPEX/OPEX/TCO: high/low/low | low to extremely high/high/high
Protocol: FC/iSCSI/NFS | container only, block, object, NFS
Security: granular encryption | dm-crypt
Backup, recovery, archive: strong, built-in | weak, varies, high-impact RTO
Reliability, availability, serviceability: unmatched, fully integrated | questionable
Cloud native: Storage-as-a-Service | self-hosted
Replication (SDS): spray and pray
The slide also depicts the software stack for each approach: app and OpenShift over the VFS and kernel for external storage, with an SDS layer added for software-defined storage.

21 Solution: HPE Persistent Storage platform for containers
Speed up DevOps: self-service automation, rich container platform integration, comprehensive REST APIs that plug into Ansible, Puppet, and Chef
Lift and shift data with applications: multicloud on-ramp for data using HPE Cloud Volumes; onboard data easily by instantly converting legacy volumes to persistent volumes
Simplify container operations: container QoS and security (IOPS limits, encryption); container data protection with cleanup and retention for snapshots and clones
Simple, fast, efficient: predictive flash for six-nines availability and support

22 HPE Persistent Storage platform for Red Hat OpenShift
Supports OpenShift Container Platform 3.5 to 3.9 and OpenShift Origin.
Architecture: the FlexVolume driver (FlexVolume plugin and provisioner), built on open APIs plus HPE Storage open-source software, talks to the HPE Docker Volume plugins over the Docker Volume API (plugin Unix socket).
Storage backends: 3PAR, Nimble Storage, and HPE Cloud Volumes (*coming soon).

23 HPE Nimble Kube Storage Controller overview
Features
Lifecycle: highly available; volume scoping; user-defined descriptions; control remove and detach behavior
Performance controls: performance policies; QoS limits (IOPS and throughput); volume placement (pools and folders)
Protection: protection templates; snapshot schedules and retention; array-to-array and HPE Cloud Volumes
Security: encrypt data at rest; set mount-point UNIX permissions
Provisioning: specify thin or thick provisioning; volumes up to 127 TB (default size 10 GB); dedupe and compression; variable block size
Zero-copy clones: reuse data from production containers
Volume import: seamless data migration; clone a Nimble volume into a Docker volume

Example StorageClass and PersistentVolumeClaim from the slide (a pod consuming the claim is sketched after this block):

kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: my-storage-class
provisioner: hpe.com/nimble
parameters:
  description: "My Description"
  destroyOnRm: "true"
  encryption: "true"
  limitIOPS: "1000"
  perfPolicy: "My Policy"
  protectionTemplate: "my-prot-1"
---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: my-pvc
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 500Gi
  storageClassName: my-storage-class

Other parameters shown on the slide (from the legacy Docker Volume workflow): perfPolicy ("SQL Server"), limitIOPS ("32000"), limitMBPS ("512"), pool ("allflash"), folder ("My Tenant"), protectionTemplate ("local-cloud"), encryption ("true"), fsOwner ("8192:500"), fsMode ("0755"), thick ("true"), sizeInGiB ("4000"), dedupe ("true"), cloneOf ("MyDockerVol1"), snapshot ("MySnapshot", used with cloneOf or importVolAsClone), createSnapshot ("true"), importVol ("MyNimbleVol1"), importVolAsClone ("MyNimbleVol1").
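To show how the claim above would be used, here is a hedged sketch of a pod mounting it; the pod name, image, and mount path are hypothetical and not from the slide:

apiVersion: v1
kind: Pod
metadata:
  name: nimble-demo
spec:
  containers:
  - name: app
    image: registry.example.com/myapp:latest   # hypothetical image
    volumeMounts:
    - name: data
      mountPath: /var/lib/data   # where the Nimble-backed volume appears in the container
  volumes:
  - name: data
    persistentVolumeClaim:
      claimName: my-pvc          # the claim created from my-storage-class above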

24 HPE Pointnext Services for OpenShift

25 OpenShift container service considerations
Container and cloud adoption is not trivial. Considerations span discovery, design, pilot, and deployment:
Overall business objectives
Determine the application migration strategy
Review networking, security, and storage requirements
Define the system architecture
Define and implement a PoC
Decide how best to containerize the app
Dev: build and test, package and archive, release and deploy
Continuous integration and deployment pipeline

26 Announcing HPE cloud native container service for OpenShift
Review application requirements: a 2-3 day workshop to gather requirements and define integrations
Create the design
Deploy the container platform environment
Pilot containerized applications
Move to production
The service typically flows in three steps, the first being discovery. To help your developers, test teams, and operations achieve peak productivity, the application, technical, business, and operational requirements are taken into careful consideration. Discovery answers questions such as which systems are integrated, how your team will deploy containerized applications, and how to secure and scale to meet business needs. The design phase makes sure those requirements are realized in the subsequent deployment. Once your environment is deployed and configured, we execute preproduction tests and help you work through onboarding cloud-native containers into your deployment.
Phases shown on the slide: discovery, design, deployment, test and evaluate, pilot or trial workload, production.

27 Plan for success

28 Move from PoC to production—Key success factors
Implement best practices and address issues and learnings from the PoC (people, process, technology)
Have a complete OpenShift container ecosystem in place: HA, security, monitoring, data management, etc.
Determine CAPEX versus OPEX; plan whether to do it yourself or partner
Accelerate this path with HPE + Red Hat

29 Resources and key contacts
Reference configuration: Reference configuration for Red Hat OpenShift Container Platform on HPE Synergy Composable Infrastructure, hpe.com/V2/GetDocument.aspx?docname=a enw
Video: hpedemoportal.ext.hpe.com/search/Automated deployment of Red Hat OpenShift on HPE Synergy
HPE platform: hpe.com/info/composableprogram and hpe.com/us/en/storage/containers.html
Red Hat OpenShift Container Platform datasheet: redhat.com/en/resources/openshift-container-platform-datasheet
GitHub repositories: github.com/RHsyseng/ocp-on-synergy and github.com/HewlettPackard/image-streamer-reference-architectures/tree/master/RC-RHEL-OpenShift
HPE contacts:
Ka Wai Leung, Containers Solutions Product Management
Gary Lee Harris, Pointnext Container Consulting
Michael Mattsson, HPE Storage Tech Marketing
Bob Zepf, HPE Strategic Alliances

30 Thank you

