Continuous Delivery on AWS


1 Continuous Delivery on AWS
Stephan Hadinger, Rudy Krol

2 Deployments at Amazon.com
Mean time between deployments (weekday): 11.6 seconds. Max number of deployments in a single hour: 1,079. Mean number of hosts simultaneously receiving a deployment: 10,000. Max number of hosts simultaneously receiving a deployment: 30,000.

3 Agenda Introduction to Continuous Integration (CI) and Continuous Delivery/Deployment (CD) CD strategies CI-CD on AWS Application Lifecycle Management Application Management Conclusion Here is the plan I propose. First, an introduction to the principles of continuous integration and continuous deployment. I will move through it quickly, because I know you are already well versed in this subject at SG CIB, so I imagine these are concepts you know well. To keep things simple I will mostly say "continuous deployment", but when I use this term I will sometimes really mean continuous delivery; we will see the difference between the two shortly. Next comes a section on the different packaging and deployment strategies we see among our customers, with the advantages and disadvantages of each. Then I will present the various AWS services that implement continuous deployment, with several approaches that meet different needs. And we will finish with a conclusion to sum it all up. I suggest you save your questions for the end; that will let me keep to the timing, because there is a lot of content. So do not hesitate to write your questions down as we go and ask them at the end.

4 Agenda Introduction to Continuous Integration (CI) and Continuous Delivery/Deployment (CD) CD strategies CI-CD on AWS Application Lifecycle Management Application Management Conclusion Here we go — let's start with the introduction.

5 Continuous Integration
Repo Package Builder Config Push Code Config Tests Version Control CI Server This is a classic example of continuous integration: developers commit code to a source repository; a continuous integration server pulls these sources to compile them, run the unit tests, produce code-quality reports, and so on; if something goes wrong, such as a failing unit test, the CI server alerts the developer; if all goes well, an application package is created and stored in an artifact repository. The problem, if you stop there, is that you certainly have a better application, but you still deploy very rarely: deployment remains a manual, complex and risky operation, and above all no extra value reaches the users. CI gives agility to the dev teams, but the challenge of continuous deployment is to extend those iterations all the way to production (back to slide 7). Commit to Git/master Get / Pull Code Dev Distributed Builds Run Tests in parallel Send Build Report to Dev Stop everything if build failed

6 What does CI give us? Test driven promotion (of development change)
Increasing velocity of feedback cycle through iterative change Bugs are detected quickly Automated testing reduces size of testing effort So why do continuous integration? The goal is to avoid long development tunnels of several weeks or months, only to realize at the end that the system's components do not integrate, or that the result does not meet the business needs. Continuous integration lets us: improve software quality through continuous testing; give feedback to developers as early as possible — several studies have shown that the later a bug is detected, the more expensive it is to fix; and give regular demonstrations to the business, to gather feedback and prioritize developments.

7 Continuous Delivery/Deployment
Version Control CI Server Package Builder Deploy Server Commit to Git/master Dev Get / Pull Code AMIs Send Build Report to Dev Stop everything if build failed Distributed Builds Run Tests in parallel Staging Env Test Env Config Tests Prod Env Push Install Create Repo CloudFormation Templates for Env Generate This is an example AWS implementation; you can already see the names of two AWS services. The traditional approach is to: create CloudFormation templates that let you launch the AWS resources that will make up an environment; and package AMIs — an AMI being a virtual machine image in AWS format that is then used to instantiate EC2 VMs. In standard continuous deployment implementations, a deployment service uses these artifacts to deploy applications to the different environments. That was just a little teaser; don't worry, I will go into detail on all these elements during the presentation — this was just an overview.

8 What does CD give us? Automated, repeatable process to push changes to production Hardens, de-risks the deployment process Immediate feedback from users Supports A/B testing or “We test customer reactions to features in production” Gives us a breadth of data points across our applications Why continuous deployment? Stephan mentioned it earlier and I will repeat it, because it remains the original purpose even though it is not written on this slide: we do continuous deployment to deliver value to users as soon as possible, and thus improve time to market. We talked earlier about the number of deployments at Amazon; it is not done for the beauty of the gesture — it is a major pillar of the Amazon strategy. Automating the deployment process gives you a repeatable, auditable, secure and reliable process, because it leaves no room for human error. It is also one of the major pillars of the Lean Startup, for those who know it: if you can measure the effectiveness of your product, you can enter a continuous improvement loop — develop, deploy, measure, develop again — where developments are driven by the measurements, so in the end the product is better because you learn from the feedback.

9 Continuous Delivery versus Continuous Deployment
The difference between Continuous Delivery and Continuous Deployment is the level of automation, which is related to the level of maturity: you move toward Continuous Deployment gradually, as you gain confidence and can fully automate the deployment pipeline. Potentially, every developer commit can lead to a production deployment if all the tests pass. In some contexts this is not possible — especially for complex applications with dependencies on other parts of the information system, where it is obviously much more complicated to set up. There you aim instead for Continuous Delivery: the objective is to have an application that is deployable to production at any time, but the production deployment step itself remains manual. The interest is still to deploy the application to production whenever possible, to keep the deployment process well oiled and deliver value to users as soon as possible.

10 Example CI/CD pipeline
Dev IT Ops Version Control Build/ Compile Code Unit Test App Code DR Env Test Env Prod Env Dev Env Application Write Infrastructure CloudFormation tar, war, zip yum, rpm Deploy App Package Deploy application only Deploy infrastructure Build AMIs Validate Templates Infra Code Infras Automate Deployment Artifact Repository This is an example of a deployment pipeline. A pipeline should represent your production value chain — in Lean this is called Value Stream Mapping. This is a simple example; it does not show the execution of functional tests, for instance. It can be much more complex, with steps in sequence but also in parallel, performance testing, security testing, and so on. I will use this diagram to position the various AWS tools I will introduce later.

11 SERVICE METRICS HOST METRICS EXTERNAL SITE METRICS LOG ANALYSIS
All this can only work if you have good visibility over your environments. Monitoring is very important for any application, but it is especially true in a continuous deployment context. When you deploy every day, or several times a day, it becomes essential to detect anomalies as early as possible, raise alerts, and find the root causes, so you can solve problems quickly. Some of our customers go as far as setting up an "immune system" that triggers automatic rollbacks when an anomaly is detected following a deployment.

12 Agenda Introduction to Continuous Integration (CI) and Continuous Delivery/Deployment (CD) CD strategies CI-CD on AWS Application Lifecycle Management Application Management Conclusion Now for continuous deployment strategies, where we get a little more concrete with AWS.

13 Delivery approaches How fast do we need to do this?
Across how many instances? How do we roll back (or forward)? Before describing strategies, it is important to ask the questions that make sense in your context, for example: How fast should we be able to deploy? That is, must we adapt quickly to load peaks, or do we have steady traffic — in which case we can afford the time to start instances before putting them in service? Across how many instances should we deploy? How should we handle rollback if a problem occurs during deployment?

14 AMI building/deployment methods
One of the main choices when implementing continuous deployment on AWS is the granularity of your AMIs. You can choose between: fully baked AMIs specific to each application, containing not only the OS and libraries but also the application server and the application binary — you just apply some minor configuration when the instance boots and it is ready for use; at the far right, very generic AMIs containing only, say, a Chef installation in addition to the OS and JDK — when the instance starts, Chef recipes are triggered to install and deploy the application; and in the middle, an intermediate option with somewhat specialized AMIs, e.g. one AMI for all Java/Tomcat applications, where at instance start-up you install the application and a few application-specific components.

15 Delivery approaches Fully Functional AMI OS-Only AMI Least flexible
to maintain Try and find a happy medium here Most amount of post-boot work As shown in this diagram, on the left you have fully functional AMIs, and on the right AMIs that contain only the OS. What we suggest is to go as far to the left as possible, for two reasons: it lets you start instances much faster, so if you know you need to handle load peaks, you can scale more easily; and deployments are more reliable — the more scripts you have to run during deployment, the higher the risk of error; for example, if you use Chef and fetch artifacts from external repositories, there is a risk those repositories are unavailable when you need them. On the other hand, it takes some work to maintain these AMIs, but there are tools to help — for instance Aminator, an open-source tool developed by Netflix that generates AMIs from a base AMI and a list of packages to install. Partially Configured AMI

16 Deployment approaches
Deploy in place Deploy all at once (service outage) Rolling updates Red-Black deployment Discrete environment Multiple environments from branches Support A/B testing Use auto scaling group So we have talked about packaging strategy; we will come back to it at the end of the presentation when we discuss containers, which allow yet another way of packaging your applications. Now let's talk about deployment strategy. You have several approaches. The first is to update your instances in place. If you update all instances at once, you get a service outage, which is problematic when you deploy often — users do not like it. So the conventional technique is to update the instances one after the other, the so-called rolling update. The second option is Red-Black deployment, a variant of Blue-Green deployment, which fully exploits the capabilities of the cloud; I will describe Red-Black right after.
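The rolling-update approach described above can be sketched as a small simulation. This is a minimal illustration of the logic, not an AWS API — all function and field names here are hypothetical:

```python
# Sketch of a rolling update: take each instance out of the load balancer
# pool, deploy the new version, health-check it, and put it back.
# Illustrative only; names are not real AWS SDK calls.

def rolling_update(instances, new_version, deploy, health_check):
    """Update instances one at a time so the service stays available."""
    updated = []
    for instance in instances:
        instance["in_service"] = False        # deregister from the ELB pool
        deploy(instance, new_version)         # install the new version
        if not health_check(instance):        # never re-register a bad node
            raise RuntimeError(f"rollback needed: {instance['id']} unhealthy")
        instance["in_service"] = True         # back into the pool
        updated.append(instance["id"])
    return updated

# Toy usage: a pool of four instances, as in the diagrams that follow.
fleet = [{"id": f"i-{n}", "version": "v1", "in_service": True} for n in range(4)]

def deploy(instance, version):
    instance["version"] = version

def health_check(instance):
    return instance["version"] == "v2"

print(rolling_update(fleet, "v2", deploy, health_check))
# → ['i-0', 'i-1', 'i-2', 'i-3']
```

The key property is that at any moment at most one instance is out of service, which is why the fleet keeps serving traffic throughout the deployment.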

17 Deploy in place – Rolling update
So here is a sample implementation of a rolling update. We have a pool of four instances to update with the new version of the application. As deployment tools you can use OpsWorks and CodeDeploy, two AWS services I will present shortly. Or you can use tools such as Chef and Puppet, which fetch the application from a repository — for example an S3 bucket — and deploy it to the instances. OpsWorks CodeDeploy

18 Deploy in place – Rolling update
We start by taking the first instance out of the load balancer's backend pool... This works well with stateless applications, much less so with stateful ones. In the stateful case, there is usually session affinity at the load balancer, so that all requests from a given user always land on the same instance. So when that instance leaves the pool, the users who were bound to it are switched to another instance: they lose their session data and will potentially have to log in again, which is problematic. This is true for all deployment strategies, so we strongly recommend developing stateless applications — this is very important, and it will also make scaling much easier.

19 Deploy in place – Rolling update
... and then we install the new version of the application on that instance.

20 Deploy in place – Rolling update

21 Deploy in place – Rolling update

22 Deploy in place – Rolling update

23 Deploy in place – Rolling update

24 Deploy in place – Rolling update

25 Red-Black deployment
EC2 Instances ELB DynamoDB MySQL RDS Instance ElastiCache Cache Node Auto Scaling Group V1 Now for Red-Black deployment. As I said, it is a way of deploying that fully exploits the capabilities of the cloud, and we will see how. In our example, we have a fleet of EC2 instances that belong to an auto scaling group and receive requests from the load balancer. The application uses several AWS backend services: DynamoDB, our NoSQL database; RDS, our managed service for relational databases; and ElastiCache, our caching service.

26 Red-Black deployment
ELB UAT Auto Scaling Group V1 Auto Scaling Group V2 EC2 Instances EC2 Instances The second step is to instantiate a new fleet of instances in a new auto scaling group, this time containing the new version of the application. It is a very different approach, because here we never update EC2 instances. We only create so-called "immutable" instances: an instance is configured once, at launch time, and after that it is never changed, never updated — you can even block SSH on these instances. You therefore know exactly how each instance is configured, unlike instances that are more or less manually updated over a long time, where you end up not knowing who did what on the machine. These new instances are in production and connected to the same backend services as the instances currently taking the traffic, shown in red. Once they are up, you can take all the time you need to run automated functional tests against the new version of the application. DynamoDB MySQL RDS Instance ElastiCache Cache Node

27 Red-Black deployment
ELB Auto Scaling Group V1 Auto Scaling Group V2 EC2 Instances EC2 Instances Once the tests have passed, we can begin redirecting traffic at the load balancer. During this transition, requests go to both the old and the new instances. DynamoDB MySQL RDS Instance ElastiCache Cache Node

28 Red-Black deployment
ELB Auto Scaling Group V1 Auto Scaling Group V2 EC2 Instances EC2 Instances The switch is then completed by redirecting all traffic to the new instances. We can leave the old instances running for a few minutes; this allows an easy rollback if there is a problem with the new ones. DynamoDB MySQL RDS Instance ElastiCache Cache Node

29 Red-Black deployment
ELB Auto Scaling Group V1 Auto Scaling Group V2 EC2 Instances EC2 Instances And finally, you delete the old instances once everything is OK. So you see, rather than updating instances, we create and delete them at will — we are in the cloud, and it costs little or nothing extra. DynamoDB MySQL RDS Instance ElastiCache Cache Node

30 Agenda Introduction to Continuous Integration (CI) and Continuous Delivery/Deployment (CD) CD strategies CI-CD on AWS Application Lifecycle Management CodeCommit CodePipeline CodeDeploy Application Management Conclusion Version Control Build/ Compile Code Dev Unit Test App Code IT Ops DR Env Test Env Prod Env Dev Env Application Write Infrastructure Deploy App. Package Build AMIs Validate Templates Infra Code Infras Automate Artifact Repository We will now talk about the AWS services, starting with three services that were announced at re:Invent last November and that manage the application lifecycle. The goal of these services is to give developers the tools to be as productive as possible, and to deploy to production as quickly and as frequently as possible. Each of these services can be used independently, and each can be integrated with all your existing tools. We will start with CodeCommit, a version control service. As I told you earlier, I will use the example pipeline diagram to position the AWS services and understand where each one operates.

31 AWS CodeCommit Announced A secure, highly scalable, managed source control service that hosts private Git repositories Eliminates the need to operate your own source control system or worry about scaling its infrastructure Built-in encryption support Fully integrated with AWS Identity and Access Management (IAM) Basically, managed Git CodeCommit is a managed Git service, in the spirit of GitHub. Managed means no installation or maintenance effort — you can get started in minutes. Stored data is automatically encrypted. And of course it is well integrated with the AWS ecosystem, including IAM integration to manage rights and access to your repositories. You can store as many files as you want, in whatever format you want; it scales automatically.

32 Agenda Introduction to Continuous Integration (CI) and Continuous Delivery/Deployment (CD) CD strategies CI-CD on AWS Application Lifecycle Management CodeCommit CodePipeline CodeDeploy Application Management Conclusion Version Control Build/ Compile Code Dev Unit Test App Code IT Ops DR Env Test Env Prod Env Dev Env Application Write Infrastructure Deploy App. Package Build AMIs Validate Templates Infra Code Infras Automate Artifact Repository Next, CodePipeline, a deployment pipeline service in the spirit of Jenkins. That is why it covers a fairly wide scope on the diagram: its role is that of a task orchestrator.

33 AWS CodePipeline Announced A continuous delivery and release automation service that aids smooth deployments You can design your development workflow for checking in code, building the code, deploying your application into staging, testing it, and releasing it to production Able to be used stand-alone as an end-to-end solution, or can be integrated with your existing source control system, test framework or build tools (like Bamboo, Jenkins, etc) So as I said, it is a continuous deployment and release automation service: you can model your software production line to build, test, package and deploy your applications. Similar to Bamboo or Jenkins

34 Agenda Introduction to Continuous Integration (CI) and Continuous Delivery/Deployment (CD) CD strategies CI-CD on AWS Application Lifecycle Management CodeCommit CodePipeline CodeDeploy Application Management Conclusion Version Control Build/ Compile Code Dev Unit Test App Code IT Ops DR Env Test Env Prod Env Dev Env Application Write Infrastructure Deploy App. Package Build AMIs Validate Templates Infra Code Infras Automate Artifact Repository CodeDeploy is somewhat special because it is a port of a tool used internally at Amazon.com for over 10 years. This internal tool is called Apollo, and it has allowed Amazon.com to deploy 50 million times during the last 12 months.

35 CodeDeploy workflow CodeCommit Limited regions
Deploy your packaged code to a fleet of EC2 instances — anywhere from a single instance to tens of thousands of instances. You can configure multiple deployment strategies: one instance at a time, half of the fleet, or all at once. It automatically balances deployments across availability zones to maintain high availability during the deployment. Applications and deployment groups are described in files in YAML format. CodeDeploy is compatible with most source repositories, such as GitHub, and continuous integration tools such as Jenkins (we are in fact developing a plugin to integrate CodeDeploy with Jenkins).
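The deployment strategies named above (one instance at a time, half the fleet, all at once) amount to choosing a batch size, while the AZ-balancing claim means each batch should span availability zones. A purely illustrative sketch of that batching logic — not the actual CodeDeploy implementation:

```python
# Sketch of splitting a fleet into deployment batches, interleaving
# availability zones so no single AZ loses all of its capacity at once.
# Illustrative only (and it assumes equally sized AZs for brevity).

def deployment_batches(instances_by_az, batch_size):
    """Order instances round-robin across AZs, then cut into batches."""
    # zip(*...) yields one instance per AZ per round: az-a[0], az-b[0], ...
    ordered = [i for group in zip(*instances_by_az.values()) for i in group]
    return [ordered[i:i + batch_size] for i in range(0, len(ordered), batch_size)]

fleet = {"eu-west-1a": ["i-1", "i-3"], "eu-west-1b": ["i-2", "i-4"]}
print(deployment_batches(fleet, 1))  # one at a time
print(deployment_batches(fleet, 2))  # half the fleet per batch
```

With `batch_size=2`, each batch contains one instance from each AZ, so even mid-deployment both zones keep serving traffic — the property the slide describes.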

36 Agenda Introduction to Continuous Integration (CI) and Continuous Delivery/Deployment (CD) CD strategies CI-CD on AWS Application Lifecycle Management Application Management Elastic Beanstalk Opsworks CloudFormation EC2 Container Service (ECS) Conclusion

37 Deployment and Management
AWS Elastic Beanstalk Automated resource management – web apps made easy AWS OpsWorks DevOps framework for application lifecycle management and automation AWS CloudFormation Templates to deploy & update infrastructure as code DIY / On Demand DIY, on demand resources: EC2, S3, custom AMI’s, etc. We have a complete panel of tools, where each service meets a different need: AWS Elastic Beanstalk, to deploy web applications simply — it automatically instantiates a load balancer, EC2 instances, potentially a database, and the application is deployed automatically; AWS OpsWorks, to manage the configuration of EC2 instances using Chef; AWS CloudFormation, infrastructure as code, to instantiate complete cloud stacks; and of course you also have the ability to manage everything yourself. Convenience Control

38 Agenda Introduction to Continuous Integration (CI) and Continuous Delivery/Deployment (CD) CD strategies CI-CD on AWS Application Lifecycle Management Application Management Elastic Beanstalk OpsWorks CloudFormation EC2 Container Service (ECS) Conclusion Version Control Build/ Compile Code Dev Unit Test App Code IT Ops DR Env Test Env Prod Env Dev Env Application Write Infrastructure Deploy App. Package Build AMIs Validate Templates Infra Code Infras Automate Artifact Repository I will detail each product, starting with Elastic Beanstalk

39 AWS Elastic Beanstalk (EB)
Easily deploy, monitor, and scale three-tier web applications and services. Infrastructure provisioned and managed by EB – but you maintain complete control. Preconfigured application containers that are easily customizable. Support for these platforms: As I said a moment ago, EB lets you deploy web applications easily. It looks like PaaS — we do not like to call it PaaS too much, because while it has the simplicity of PaaS, you keep complete control over the AWS resources used, whether the EC2 instances, the load balancer, and so on. EB offers default application containers that you can customize. Here is the list of supported technologies — Tomcat and Glassfish for Java, the ones you know, plus a latecomer added recently, Go: Java PHP Python Ruby .NET Node.js Docker Go

40 Elastic Beanstalk model
Application Environments Infrastructure resources (such as EC2 instances, ELB load balancers, and Auto Scaling groups) Runs a single application version at a time for better scalability An application can have many environments (such as staging and production) Application versions Application code Stored in Amazon S3 An application can have many application versions (easy to rollback to previous versions) Saved configurations Configuration that defines how an environment and its resources behave Can be used to launch new environments quickly or roll-back configuration An application can have many saved configurations There is a lot of text here, but what you need to remember is that an EB application: is deployed to multiple environments (dev, test, prod); has several versions; and has several saved configurations. An environment runs a single application version at any given time.

41 Elastic Beanstalk environment
Two types: Single instance Load balancing, auto scaling Two tiers (web server and worker) Elastic Beanstalk provisions necessary infrastructure resources such as load balancers, auto-scaling groups, security groups, and databases (optional) Configures Amazon Route 53 and gives you a unique domain name (For example: yourapp.elasticbeanstalk.com) To go into a little more detail, an environment can be of two types: a single instance (ideal for dev environments), or multiple instances with a load balancer and an auto scaling group. EB also provisions everything you need in terms of security, and potentially a database, as shown on the right-hand diagram with RDS. You also get a DNS record under elasticbeanstalk.com, which you can hide behind a CNAME — for example in Route 53, our DNS service, though you can also use the DNS provider of your choice.

42 Focus on building your application
On-Instance configuration Focus on building your application Elastic Beanstalk configures each EC2 instance in your environment with the components necessary to run applications for the selected platform No more worrying about logging into instances to install and configure your application stack Your code HTTP server Application server Language interpreter Operating system Host Instances come prepackaged, so your application stack is available turn-key; this saves you from having to manage the installation and configuration yourself.

43 Application versions and saved configurations
All versions are stored durably in Amazon S3. Code can also be pushed from a Git repository Saved configurations Save these for easy duplication for A/B testing or non-disruptive deployments As I said at the beginning, an application has multiple versions, stored in S3; you simply select the version you want to install in the environment of your choice. Configurations (version + environment) can be saved so they can easily be duplicated as needed.

44 Deployment options Via the AWS Management Console Via EB CLI
1 Via the AWS Management Console Via EB CLI Via the AWS Toolkit for Eclipse and the Visual Studio IDE 2 $ eb deploy 3 You have several options to deploy an application on EB: via the web administration console; via a command-line tool dedicated to EB — as you can see, you can simply run "eb deploy" in the project directory and it automatically triggers the deployment; or via the Eclipse or Visual Studio plugin.

45 Agenda Introduction to Continuous Integration (CI) and Continuous Delivery/Deployment (CD) CD strategies CI-CD on AWS Application Lifecycle Management Application Management Elastic Beanstalk OpsWorks CloudFormation EC2 Container Service (ECS) Conclusion Version Control Build/ Compile Code Dev Unit Test App Code IT Ops DR Env Test Env Prod Env Dev Env Application Write Infrastructure Deploy Conf + App. Package App. Build AMIs Validate Templates Infra Code Infras Automate Artifact Repository Next, OpsWorks, a configuration service for EC2 instances based on Chef, which offers more flexibility than EB.

46 On-instance execution via Chef client/zero
AWS OpsWorks architecture OpsWorks 1 Command JSON 3 Command Log+Status On-instance execution via Chef client/zero 2 Schematically, OpsWorks sends commands as JSON to the EC2 instances. On each instance, one or more recipes are executed via Chef Solo. And of course you can view the logs and execution status in OpsWorks.

47 Chef integration Supports Chef 11.10
Built-in convenience cookbooks / bring your own Chef run is triggered by lifecycle event firing: push vs. pull Event comes with stack state JSON

48 OpsWorks components stack layer instance app
A stack represents the cloud infrastructure and applications that you want to manage together A layer defines how to setup and configure a set of instances and related resources. Eg Java App server layer, PHP layer, RDS layer, MySQL Layer, HAProxy layer etc An instance represents an Amazon EC2 instance and defines how to scale: manually, 24/7 instances, or automatically, with load-based or time-based instances Each application is represented by an app, which specifies the application type and contains the information that AWS OpsWorks needs to deploy the application from the repository to your instances A stack is a set of AWS resources, with the region, availability zones and OS that define it. A layer is a type of instance — for example Tomcat instances, MySQL instances, and so on — and contains the information OpsWorks needs to install the software, for example via Chef recipes. An instance is an EC2 instance that belongs to a layer and for which you define startup and scaling parameters. An app contains the information OpsWorks needs to deploy an application onto an instance, again for example via Chef recipes.

49 Instance lifecycle commands
There are five types of events corresponding to the lifecycle of OpsWorks applications; an event is a command sent to an instance. They are setup, configure, deploy, undeploy and shutdown. The configure event is used when an instance enters or leaves the instance pool: it is sent to all the other members of the pool, so that each can update its configuration with the new cluster topology.
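The fan-out behaviour of the configure event can be sketched in a few lines. This is a toy simulation of the event flow just described, not the OpsWorks agent — all names are illustrative:

```python
# Sketch of OpsWorks lifecycle events: when an instance joins the pool it
# receives its own events, and every other member gets `configure` so it
# can update its view of the cluster topology. Illustrative names only.

LIFECYCLE_EVENTS = ["setup", "configure", "deploy", "undeploy", "shutdown"]

def on_instance_joined(pool, new_instance):
    """Return the (instance, event) commands triggered by a scale-up."""
    commands = [(new_instance, "setup"), (new_instance, "deploy")]
    # Every *other* member is told to reconfigure -- e.g. an HAProxy layer
    # would add the newcomer to its backend list in its configure recipe.
    commands += [(member, "configure") for member in pool]
    pool.append(new_instance)
    return commands

pool = ["i-1", "i-2"]
print(on_instance_joined(pool, "i-3"))
# → [('i-3', 'setup'), ('i-3', 'deploy'), ('i-1', 'configure'), ('i-2', 'configure')]
```

This is why layers like HAProxy can keep their backend lists current without any manual intervention: topology changes arrive as events, and the Chef recipes attached to `configure` do the rest.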

50 Agenda Introduction to Continuous Integration (CI) and Continuous Delivery/Deployment (CD) CD strategies CI-CD on AWS Application Lifecycle Management Application Management Elastic Beanstalk OpsWorks CloudFormation EC2 Container Service (ECS) Conclusion Version Control Build/ Compile Code Dev Unit Test App Code IT Ops DR Env Test Env Prod Env Dev Env Application Write Infrastructure Deploy App. Package Build AMIs Validate Templates Infra Code Infras Automate Artifact Repository Now CloudFormation, an infrastructure-as-code tool. This service deploys infrastructure only.

51 AWS CloudFormation Infrastructure as Code
Integrates with version control JSON format Templates Stacks Supports all AWS resource types The principle is simple: you write templates in JSON format representing an AWS stack, and you version them in a version control system like Git. CloudFormation supports all AWS resource types.

52 Template File Defining Stack
Application stack example Use the version control system of your choice to store and track changes to this template Template File Defining Stack Test Dev Prod Build out multiple environments, such as for Development, Test, and Production using the template Git Subversion Mercurial As a diagram, it looks like this: a template file defines the complete stack of infrastructure needed to deploy an application. This file can be stored in the version control system of your choice. The template then lets you easily instantiate as many environments as you want — for example dev, test and prod. The entire infrastructure can be represented in an AWS CloudFormation template.

53 Template anatomy
{
  "Description" : "Create an EC2 instance.",
  "Resources" : {
    "Ec2Instance" : {
      "Type" : "AWS::EC2::Instance",
      "Properties" : {
        "KeyName" : "my-key-pair",
        "ImageId" : "ami-75g0061f",
        "InstanceType" : "m1.medium"
      }
    }
  }
}
Here is a simple example of a CloudFormation template. It describes an EC2 instance of type m1.medium, based on the specified AMI, onto which the key pair "my-key-pair" is deployed.

54 Template anatomy
{
  "Description" : "Create an EC2 instance.",
  "Parameters" : {
    "UserKeyName" : {
      "Description" : "The EC2 Key Pair to allow SSH access to the instance",
      "Type" : "String"
    }
  },
  "Resources" : {
    "Ec2Instance" : {
      "Type" : "AWS::EC2::Instance",
      "Properties" : {
        "KeyName" : { "Ref" : "UserKeyName" },
        "ImageId" : "ami-75g0061f",
        "InstanceType" : "m1.medium"
      }
    }
  }
}
Some properties can be made configurable, to be filled in when the template is instantiated. For example, here we define a key-pair parameter, which lets the user of the template supply their own.

55 Agenda Introduction to Continuous Integration (CI) and Continuous Delivery/Deployment (CD) CD strategies CI-CD on AWS Application Lifecycle Management Application Management Elastic Beanstalk OpsWorks CloudFormation EC2 Container Service (ECS) Conclusion Version Control Build/ Compile Code Dev Unit Test App Code IT Ops DR Env Test Env Prod Env Dev Env Application Write Infrastructure Deploy Containers Package Build AMIs Validate Templates Infra Code Infras Automate Artifact Repository And finally, EC2 Container Service. I will not explain what Docker is — I assume Adrian has already covered it. In two words, for those who have not had the opportunity to look into it: Docker is the technology that democratized the concept of containers, which had existed for some time with LXC and cgroups, technologies that let you isolate runtime environments on a single Linux host. The reason for Docker's massive adoption is that it greatly simplifies the use of containers with a simple API: one command to package an image, one command to start a container from an image, and so on. The problem with Docker is that it is still very young, and it remains very complicated to manage a Docker environment in production, with many containers distributed across a fleet of machines. So it works very well for development environments, but as it stands it is not comprehensive enough on its own to be used in production.

56 EC2 Container Service (ECS)
Cluster management made easy Flexible scheduling High performance Resource efficiency Extensible Security Programmatic control Docker compatibility Monitoring AWS integration ECS is a container cluster management service that addresses precisely this problem: deploying Docker containers in an industrial way. You can easily manage a cluster of containers with flexible scheduling. The advantage is that everything is based on EC2 behind the scenes, so you benefit from the full performance of EC2 instances, but also from features such as networking with VPC, security with Security Groups, and monitoring with CloudWatch. And of course everything is exposed as an API — I forgot to mention it, but ALL AWS resources can be managed through the REST APIs; the administration console and command-line tools are just clients of the public APIs. One of the highlights of ECS is that cluster management and scheduling are two separate functions. They are independent components, which in particular lets you plug in a third-party scheduler — for example, we have developed a plugin that lets Mesos schedule tasks onto ECS.

57 User workflow
1. I have a Docker image I want to run in a cluster
2. Push images
3. Create task definition (similar to a fig template)
4. Run instances, using a custom AMI with Docker support and the ECS agent; the agent registers with the default cluster
5. Describe cluster, to get information about the cluster and available resources
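Step 3's task definition is a JSON document, much like a fig/docker-compose file. A minimal example expressed in Python for illustration — the field names follow the ECS task-definition schema, but the image name and values here are made up:

```python
import json

# Minimal ECS-style task definition: one container, an image from a
# registry, cpu/memory limits and a port mapping. Values are illustrative.
task_definition = {
    "family": "hello-web",
    "containerDefinitions": [
        {
            "name": "web",
            "image": "my-registry/hello-web:latest",   # hypothetical image
            "cpu": 256,
            "memory": 512,
            "essential": True,
            "portMappings": [{"containerPort": 80, "hostPort": 80}],
        }
    ],
}

print(json.dumps(task_definition, indent=2))
```

Registering this document with the cluster (step 3) is what later lets a single "run task" call (step 7) place the container on any instance with spare resources.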

58 User workflow
6. Initial cluster state
7. Run task
8. Describe cluster: new cluster state

59 Agenda Introduction to Continuous Integration (CI) and Continuous Delivery/Deployment (CD) CD strategies CI-CD on AWS Application Lifecycle Management Application Management Conclusion

60 “Build your datacenter in 5 minutes.”
 Infrastructure as Code

61 “Treat your instances as cattle!”
 Feel free to create and terminate instances

62 “If it moves, plot it.”  Measure everything

63 “If it hurts, do it more often.”
Automate everything (for security, efficiency and business agility)

64 Questions?

