1
AWS core services Compute, Storage, Network
| Piergiorgio Malusardi | Solution Architect – Public Sector | Amazon Web Services | 14/05/2019
2
AWS Global Infrastructure
21 Geographical Regions, 64 Availability Zones, 160+ PoPs

Region & Number of Availability Zones (AZs):
- US East: N. Virginia (6), Ohio (3)
- US West: Oregon (3), Northern California (3)
- GovCloud (US): US-East (3), US-West (3)
- Canada: Central (2)
- South America: São Paulo (3)
- Europe: Frankfurt (3), Ireland (3), London (3), Paris (3), Stockholm (3)
- Asia Pacific: Singapore (3), Sydney (3), Tokyo (4), Osaka-Local (1)*, Seoul (2), Mumbai (2)
- China: Beijing (2), Ningxia (3)
- Announced Regions: four Regions and 12 AZs in Bahrain, Cape Town, Jakarta, and Milan

[TALKING POINTS] AWS has the largest global infrastructure footprint, with 21 Regions, 64 AZs each containing one or more data centers, and 160+ points of presence (PoPs), 149 of them Edge locations. And this footprint is constantly increasing, at a significant rate.

See: AWS Global Infrastructure, AWS CloudFront Locations, AWS Direct Connect Locations.

* Available to select AWS customers who request access. Customers wishing to use the Asia Pacific (Osaka) Local Region should speak with their sales representative.
3
AWS Region Design

AWS Regions are composed of multiple AZs for high availability, high scalability, and high fault tolerance. Applications and data are replicated in real time and kept consistent across the different AZs.

Diagram: an AWS Region containing multiple AWS Availability Zones (AZs) and transit datacenters.

[TALKING POINTS] Unlike other cloud infrastructure providers, each AWS Region has multiple Availability Zones, and each AZ has multiple, physically separated data centers. By comparison, some cloud providers claim to have just over 50 regions, yet most of those regions have no AZs and just one data center, so they are not comparable to the multi-AZ, multi-data-center AWS Region. Each region also has two independent, fully redundant transit centers that allow traffic to cross the AWS network, enabling regions to connect to the global network. Further, we don't use other backbone providers for AWS traffic once it hits our backbone.

A Region is a physical location in the world where we have multiple Availability Zones. Availability Zones consist of one or more discrete data centers, each with redundant power, networking, and connectivity, housed in separate facilities.
4
Introduction to Services and Categories
In this module you will cover the services and categories for AWS and a little bit about AWS documentation. AWS offers a broad set of global, cloud-based products that can be used as building blocks for common cloud architectures. Each product offers a variety of services. Some of the categories discussed in this module include Compute; Storage; Database; Networking & Content Delivery; and Security, Identity & Compliance. Let's take a moment to look at each of these categories.
5
Network Services
6
So, let's draw in the backbone. This is the AWS global network
So, let's draw in the backbone. This is the AWS global network. It spans the globe and connects all of our regions. Each of these links is Amazon-controlled, redundant 100 GbE fiber, often providing many terabits of capacity between regions. That's completely private network capacity that's for AWS's use only and carries all of our customer traffic between regions.

The other thing that's really interesting is that this is not a network we had lying around before we started AWS, or something that Amazon used back in the day. It was built specifically for the cloud and optimized for cloud-based workloads, and we continue to iterate on it. It also prioritizes our customers' traffic: there is no other traffic on our network that we would prioritize over anything our customers are doing. Tuning this network and keeping it highly available, with low latency and predictable performance for our customers, is one of our highest priorities.

AWS Global Network: redundant 100 GbE network; private network capacity between all AWS Regions, except China.
7
VPC – Virtual Private Cloud
Diagram: AWS Cloud with a VPC spanning Availability Zone 1 and Availability Zone 2, plus a VPN Gateway.

Route Table 1:
- Destination /16 → Target: local (Active)
- Destination /0 → Target: IGW

Route Table 2:
- Destination /16 → Target: local (Active)
- Destination /16 → Target: VPN
- Destination /0 → Target: NAT GW (Active)
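As a concrete illustration of the two route tables above, here is a minimal boto3 sketch. All resource IDs (vpc-…, igw-…, nat-…) are hypothetical placeholders, not values from the slide.

```python
import boto3

# Minimal sketch of the two route tables shown above.
ec2 = boto3.client("ec2", region_name="eu-west-1")

# Route Table 1 (public): the local route is implicit; add a default
# route that sends internet-bound traffic to the Internet Gateway.
public_rt = ec2.create_route_table(VpcId="vpc-0123456789abcdef0")["RouteTable"]
ec2.create_route(
    RouteTableId=public_rt["RouteTableId"],
    DestinationCidrBlock="0.0.0.0/0",
    GatewayId="igw-0123456789abcdef0",      # Internet Gateway
)

# Route Table 2 (private): default route via a NAT Gateway instead.
private_rt = ec2.create_route_table(VpcId="vpc-0123456789abcdef0")["RouteTable"]
ec2.create_route(
    RouteTableId=private_rt["RouteTableId"],
    DestinationCidrBlock="0.0.0.0/0",
    NatGatewayId="nat-0123456789abcdef0",   # NAT Gateway
)
```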
8
Elastic Load Balancing security tools
Elastic Load Balancing security tools

- TLS offloading: offload TLS processing to Application Load Balancer
- SNI support: allows multiple TLS certificates per load balancer
- Access logs: request logging for all requests received by the load balancer
- Application firewall: integrated with AWS WAF (Web Application Firewall)
- User authentication: secure your application by offloading user authentication to Application Load Balancer, including support for federated identities

ELB provides a number of built-in security features, from TLS offloading, to SNI support (where you can host up to 25 TLS certificates on a single load balancer), to access logs and a fully fledged Web Application Firewall. Earlier this year we launched User Authentication for ALB, which allows the load balancer to handle the login and authentication for all users of your website. Simply configure the load balancer with the identity provider of your choice, and we'll take care of the rest, directing the user to the necessary login page if needed. Your application just needs to read the headers when the fully authenticated request makes its way through to the backend. Yet another way in which we're making it easy to adopt the very best security practices and lowering the cost to you and your application developers; we're very excited about this feature!
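To make TLS offloading and SNI concrete, here is a hedged boto3 sketch; the load balancer, target group, and certificate ARNs are hypothetical placeholders.

```python
import boto3

elbv2 = boto3.client("elbv2")

# TLS offload: terminate HTTPS at the ALB with a default certificate.
listener = elbv2.create_listener(
    LoadBalancerArn="arn:aws:elasticloadbalancing:eu-west-1:111122223333:loadbalancer/app/demo-alb/abc123",
    Protocol="HTTPS",
    Port=443,
    Certificates=[{"CertificateArn": "arn:aws:acm:eu-west-1:111122223333:certificate/default-cert"}],
    DefaultActions=[{
        "Type": "forward",
        "TargetGroupArn": "arn:aws:elasticloadbalancing:eu-west-1:111122223333:targetgroup/web/def456",
    }],
)["Listeners"][0]

# SNI: attach an additional certificate; the ALB selects the right one
# per client hello, enabling multiple TLS sites on one load balancer.
elbv2.add_listener_certificates(
    ListenerArn=listener["ListenerArn"],
    Certificates=[{"CertificateArn": "arn:aws:acm:eu-west-1:111122223333:certificate/second-site-cert"}],
)
```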
9
Compute Services
10
Broadest and deepest platform choice
Amazon EC2: virtual server instances in the cloud
Amazon ECS, EKS, and Fargate: container management services for running Docker containers on a managed cluster of EC2 instances
AWS Lambda: serverless compute for stateless code execution in response to triggers

Compute is at the core of nearly every AWS customer's infrastructure, whether in the form of instances, containers, or serverless compute. We are delivering choice in how you consume compute, to support existing applications and build new applications in the way that suits your business and application needs. And within each of these areas, we are rapidly adding completely new capabilities.
11
Broadest and deepest platform choice
Amazon EC2: Linux | Windows; Arm and x86 architectures; general purpose and workload optimized; bare metal, disk, and networking capabilities; packaged | custom | community AMIs; multiple purchase options: On-Demand, Reserved Instances, Spot

Instances are the most mature area of our compute platform, with deep investment and long-running, proven experience. It is also where customers have the greatest need for choice to support their current and future applications. For instances, we offer choice across a number of dimensions. You have your choice of operating systems, with Linux and Windows, as well as choice of architectures, with support for x86 and Arm workloads. For those workloads, we have instances which are general purpose as well as optimized for specific needs, such as compute-optimized for HPC workloads or memory-optimized for big data and analytics. Over the last year, we have introduced new capabilities to enhance our instances with bare metal, attached SSD, and most recently, enhanced networking. These instances are packaged for you in many ways: you can choose one of our AMIs, you can customize your own images, or you can select from additional varieties of AMIs provided by our community. And those instances are available through flexible purchase models to meet your business and budget needs.
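A minimal boto3 sketch of that choice in practice: launching an On-Demand, general-purpose instance from an AMI. The AMI and subnet IDs are hypothetical; swap the instance type per workload.

```python
import boto3

ec2 = boto3.client("ec2")

response = ec2.run_instances(
    ImageId="ami-0123456789abcdef0",       # packaged, custom, or community AMI
    InstanceType="m5.large",               # or c5.* / r5.* etc. per workload
    MinCount=1,
    MaxCount=1,
    SubnetId="subnet-0123456789abcdef0",
    TagSpecifications=[{
        "ResourceType": "instance",
        "Tags": [{"Key": "Name", "Value": "demo-general-purpose"}],
    }],
)
print(response["Instances"][0]["InstanceId"])
```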
12
Broadest choice of processors and architectures
Broadest choice of processors and architectures

Intel® Xeon® Scalable (Skylake) processor | NVIDIA V100 Tensor Core GPUs | AMD EPYC processor | AWS Graviton Processor

Beyond the operating system, we are providing you the choice of processor and architecture to build the applications you need, with the flexibility in choice that you want. We believe that by providing greater choice, customers can choose the right compute to power their application and workload. We have had a rich and long-term partnership with Intel, and the Skylake processor is key to powering our most powerful instances. NVIDIA helps to power your machine learning and graphics workloads. In early November, we announced our support for AMD and the AMD EPYC processor, and we are the only cloud with AMD available today. Lastly, we announced that AWS has released a new processor, the Graviton processor, based on Arm architectures. Now we are the only major cloud provider to support Arm workloads. Customers have told us processor choice matters to them, and we are already seeing customers testing their apps with these new instances and processors.

Right compute for the right application and workload
13
AWS support for Arm-based applications
Optimized cost and performance for Arm-based applications: AWS Graviton Processor with Arm-based cores and customized silicon; up to 45% cost savings and higher price/performance; ideal for scale-out workloads including containerized microservices, web, and e-commerce sites.

Similar to AMD, we have also announced that we are introducing new A1 instances. With this, we continue to offer additional choice for customers, and we are the only cloud today with support for Arm workloads. These A1 instances provide better price and price/performance versus comparable x86-based instances, and many customers may see up to 45% cost savings depending on their workload. They are ideal for scale-out workloads including containerized microservices and web and e-commerce sites. These instances are powered by AWS Graviton Processors that feature Arm-based cores and customized silicon, built with AWS's extensive expertise in cloud infrastructure. They feature a 2.3 GHz processor, up to 32 GiB of memory, up to 10 Gbps of network performance, and EBS-optimized burst.

Some of you may be asking why we are doing this. This work is fundamentally grounded in our customer obsession to deliver the best price/performance in the cloud. Just as with AMD, not all customer workloads need the same capabilities and compute power as high-end workloads. There is an opportunity to build a processor that better fits these workloads for cost and capability. We plan to continue on this journey to deliver new innovation and cost savings for our customers. Customers, especially those with Arm workloads, can onboard new applications and innovate with us in the cloud with these new instances.
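A hedged sketch of finding and launching Arm instances with boto3: list instance types that support the arm64 architecture, then launch an A1 instance. The AMI ID is a hypothetical placeholder and must be an arm64 image.

```python
import boto3

ec2 = boto3.client("ec2")

# Discover instance types that run arm64 code (includes the a1 family).
offerings = ec2.describe_instance_types(
    Filters=[{"Name": "processor-info.supported-architecture",
              "Values": ["arm64"]}]
)
print(sorted(t["InstanceType"] for t in offerings["InstanceTypes"]))

# Launch an A1 instance; the AMI must be built for arm64.
ec2.run_instances(
    ImageId="ami-0123456789abcdef0",   # hypothetical arm64 AMI
    InstanceType="a1.large",
    MinCount=1,
    MaxCount=1,
)
```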
14
EC2 instances:
- Amazon Lightsail: virtual private servers
- T3: general purpose, burstable
- M5, M5d: general purpose
- D2: dense storage
- H1: big data optimized
- R5, R5d, R5m: memory-optimized
- X1: memory intensive; X1e: in-memory
- I3, I3m: high I/O
- C5, C5d: compute-optimized
- G3: graphics intensive
- P3: general-purpose GPU
- F1: FPGAs
- z1d, z1dm: compute and memory intensive
15
High memory instances: certified for SAP HANA
NEW! EC2 High Memory Instances: 6 TB, 9 TB, and 12 TB

- Up to 12 TB of memory; SAP-certified
- Custom Intel® Xeon® Scalable Processor
- Native to AWS; out-of-the-box integration
- Simple management: AWS CLI, Console, IAM
- Flexibility to scale; resize in minutes
- Memory sizes across the EC2 fleet scale from 244 GB, 488 GB, 768 GB, 1 TB, 2 TB, and 4 TB up to the new 12 TB instances

A new class of EC2 instances built for running mission-critical deployments of SAP HANA. These are EC2 Bare Metal instances, available on EC2 Dedicated Hosts in an Amazon VPC:
- Launch and manage the instances through the AWS portal and CLI/SDK
- Out-of-the-box connectivity to other Amazon EC2 instances, without the need for custom networking configuration or a hybrid architecture
- Take advantage of other AWS services, including EBS, S3, CloudFormation, CloudWatch, AWS Config, etc.
- Available within the same IAM framework as the rest of AWS for authentication, authorization, and auditing

These are the first ever EC2 instances powered by an 8-socket server platform based on Intel Skylake processors. EBS serves as the storage for running HANA, so you can take advantage of elastic storage capacity. [Key competitive differentiation] Just like the SAP-certified EC2 X1 and X1e instances, they are designed to support enterprise-class data protection and business continuity capabilities, including multi-AZ high availability and multi-region disaster recovery configurations, backups, and EBS snapshots.
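Since these instances run on EC2 Dedicated Hosts, here is a hedged boto3 sketch of the flow: allocate a host, then launch onto it. The instance type name u-6tb1.metal is an assumption for the 6 TB size (it is not named on the slide), and the AMI ID is a placeholder.

```python
import boto3

ec2 = boto3.client("ec2")

# Allocate a Dedicated Host sized for the 6 TB High Memory instance.
host_id = ec2.allocate_hosts(
    AvailabilityZone="us-east-1a",
    InstanceType="u-6tb1.metal",   # assumed type for the 6 TB size
    Quantity=1,
)["HostIds"][0]

# Launch the bare-metal instance onto that specific host.
ec2.run_instances(
    ImageId="ami-0123456789abcdef0",   # e.g. a SAP HANA-ready image
    InstanceType="u-6tb1.metal",
    MinCount=1,
    MaxCount=1,
    Placement={"Tenancy": "host", "HostId": host_id},
)
```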
16
AWS container services landscape
AWS container services landscape:
- Management (deployment, scheduling, scaling, and management of containerized applications): Amazon Elastic Container Service (ECS), Amazon Elastic Container Service for Kubernetes (EKS)
- Hosting (where the containers run): Amazon EC2, AWS Fargate
- Image registry (container image repository): Amazon Elastic Container Registry (ECR)

The current AWS container services landscape covers a broad set of products. At the orchestration layer we have Amazon ECS and Amazon EKS. EKS makes it easy to deploy, manage, and scale containerized applications using Kubernetes on AWS. You can currently run your containers on ECS using either the EC2 launch type, where you get to manage the underlying instances on which your containers are running, or you can choose to run your containers in a serverless manner with the AWS Fargate launch type. Finally, we provide a registry service, Amazon ECR, where you can store your container images.
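A minimal sketch of the Fargate launch type with boto3, where there are no instances to manage. The cluster, task definition, subnet, and security group names are hypothetical.

```python
import boto3

ecs = boto3.client("ecs")

# Run one copy of a registered task definition serverlessly on Fargate.
ecs.run_task(
    cluster="demo-cluster",
    launchType="FARGATE",                  # vs. "EC2", where you manage hosts
    taskDefinition="web-app:3",            # family:revision
    count=1,
    networkConfiguration={
        "awsvpcConfiguration": {
            "subnets": ["subnet-0123456789abcdef0"],
            "securityGroups": ["sg-0123456789abcdef0"],
            "assignPublicIp": "ENABLED",
        }
    },
)
```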
17
Making development easier with AWS Lambda
Making development easier with AWS Lambda

- Accessible for all developers: support for all runtimes with Lambda Layers and the Runtime API; ISO, PCI, HIPAA, SOC, GDPR, and FedRAMP compliance
- Greater productivity: toolkits for popular IDEs (VS Code, IntelliJ, and PyCharm); simplified deployment with nested apps
- Enable new application patterns: 15-minute functions; SQS for Lambda; Application Load Balancer support for Lambda; support for Kinesis Data Streams Enhanced Fan-Out and HTTP/2

We just talked about how we remove responsibility for managing infrastructure with Lambda. But you still have to write the code. And we want to make it easier to write application code with Lambda. Lambda serves trillions of executions each month for hundreds of thousands of active customers, so we've prioritized making it easier to build Lambda functions quickly.
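To show how little code a trigger-driven function needs, here is a minimal sketch of a Python Lambda handler for the standard S3 event notification shape; the bucket and keys come from whatever object fired the event.

```python
import json

def handler(event, context):
    # Each S3 notification can batch multiple records.
    for record in event.get("Records", []):
        bucket = record["s3"]["bucket"]["name"]
        key = record["s3"]["object"]["key"]
        print(f"Object created: s3://{bucket}/{key}")
    return {"statusCode": 200, "body": json.dumps("ok")}
```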
18
AWS Storage Services

Thank you for the opportunity to discuss our AWS Compute services, which span Amazon EC2, containers, and serverless. My name is _______ and I am the _______ for AWS.
19
More choice for more applications
- File storage: Amazon EFS (EFS Standard, EFS Infrequent Access), Amazon FSx for Windows File Server, Amazon FSx for Lustre
- Block storage: Amazon EBS for Amazon EC2 (General Purpose SSD, Provisioned IOPS SSD, Throughput-Optimized HDD, Cold HDD, Elastic Volumes)
- Object storage: Amazon S3 (S3 Standard, S3 Standard-IA, S3 One Zone-IA, S3 Intelligent-Tiering, S3 Glacier, S3 Glacier Deep Archive)
- Hybrid: AWS Storage Gateway Family
- Backup: AWS Backup
- Migration patterns: re-host, re-platform, re-architect

AWS has been helping customers on their cloud journey for almost 13 years now. We've helped customers like Netflix, which built their entire infrastructure on AWS; General Electric, which moved 9,000 applications from on-premises to AWS; and Pinterest, which started on AWS and continues to build on top of it today. That is why AWS storage is so broad and deep: it's feedback from over a decade of use at scale that drives 90-95% of our roadmap. And we've learned a lot. I wanted to share some observations of adoption patterns we see for how customers have taken this cloud journey with storage.

Let's talk about re-host first. This is typically "lift and shift", where you take your workloads, typically delivered on virtual machines, and migrate them as-is onto AWS. It's also a good way to get some quick wins. GE didn't start moving to the cloud by saying "we're going to move 9,000 applications". GE started their cloud journey with an aggressive top-down goal of moving 50 applications in 30 days. When you take a strong leadership position like that, your organization will find a way to get those quick wins, whether it's re-hosting or any of the other patterns that I'm going to talk about. When Lionsgate decided to lift-and-shift their SharePoint and SAP deployments to AWS, they were able to reduce their time-to-deployment from weeks to days or hours. This quicker turnaround has been a win for their IT department and opened the doors to additional workloads moving from test and development to production. Using AWS has allowed them to avoid investing in a new data center, saving $1M+ in three years. They estimate that AWS will save them 50% over a traditional hosting facility.

Re-platform means lifting an existing application to the cloud and adopting bits and pieces of the new AWS platform to change your application. In the database world, you could spin up EC2 instances, install MySQL, and then migrate a database. However, you could re-platform by using our RDS service and let AWS manage that database for you. In the storage world, that means taking an existing application that depends on an underlying shared file system (which an on-premises network-attached storage server might provide, for example) and spinning up a new EFS file system that can be a drop-in replacement for the application's storage layer. A fully managed file system like EFS removes all of the provisioning and administration overhead of an on-premises product and lets you focus on migrating the application itself to the AWS Cloud. This allowed the BBC to migrate their Red Button service to the AWS Cloud without having to spend the time or money to re-write core components that depend on a POSIX-compliant file system, and for less cost than running their on-premises NFS server. This approach also allows you to get those quick wins. You're still using similar technologies but have moved over to managed implementations like EFS. It's an easy optimization.

And finally re-architecting, which is building your application to take full advantage of AWS cloud services. This also applies to any new application that you are writing today. There is no reason you shouldn't be writing your new applications in the cloud. You get the advantages from the start. But it's also game-changing for your existing business-critical applications. Many times this is used as a way to modernize applications you have been running for years. FINRA is the regulatory body of the US stock exchanges. Five years ago, when FINRA was first moving to the cloud, they looked at different options for their cloud journey. They could have started with any of the options you see here, but decided to go with re-architecting their core application to detect fraud within 24 hours of market close. They did that because it was too important not to. FINRA believed that they needed the cloud for the core of their mission, which was to be the watchdog of the consumer investor, and the only way to do that was to achieve the maximum agility that they could possibly get. And now FINRA is doing 500 billion validation checks on trades daily, all running on AWS. And it's fairly common to use one or more of these patterns together when charting your course and journey to the cloud!

You told us you needed an SMB file storage solution. You told us you needed a better solution to run Lustre for your HPC and machine learning workloads. You told us you wanted a solution to back up key AWS resources. Our roadmap is driven by your feedback. We're expanding our portfolio with three new storage classes and two new file storage services:

1. Amazon S3 Intelligent-Tiering is a new S3 storage class that automatically optimizes customers' storage costs for data with unknown or changing access patterns by moving data to the most cost-effective storage tier.
2. Amazon S3 Glacier Deep Archive is a new storage class that delivers the lowest cost of any storage service, at less than 1/10th of one cent per gigabyte per month.
3. Amazon FSx for Windows File Server provides fully managed Windows-based shared file storage designed to help customers lift-and-shift their applications to AWS.
4. Amazon FSx for Lustre is a fully managed file system that is optimized for compute-intensive workloads, such as high-performance computing, machine learning, and media data processing workflows.
5. Amazon EFS IA is a new storage class for Amazon EFS that is designed for files accessed less frequently, enabling customers to reduce storage costs by up to 85% compared to the EFS Standard storage class.
6. And we've also announced two new data transfer services that I'll talk about a little later.
20
Your choice of Amazon S3 storage classes
Your choice of Amazon S3 storage classes, ordered from frequent to infrequent access:

- S3 Standard: active, frequently accessed data; milliseconds access; ≥3 AZs; $0.0210/GB
- S3 Intelligent-Tiering: data with changing access patterns; milliseconds access; ≥3 AZs; $0.0210 to $0.0125/GB; monitoring fee per object; minimum storage duration
- S3 Standard-IA: infrequently accessed data; milliseconds access; ≥3 AZs; $0.0125/GB; retrieval fee per GB; minimum storage duration; minimum object size
- S3 One Zone-IA: re-creatable, less accessed data; milliseconds access; 1 AZ; $0.0100/GB; retrieval fee per GB; minimum storage duration; minimum object size
- S3 Glacier: archive data; select minutes or hours for retrieval; ≥3 AZs; $0.0040/GB; retrieval fee per GB; minimum storage duration; minimum object size
- S3 Glacier Deep Archive: archive data; select 12 or 48 hours for retrieval; ≥3 AZs; less than $0.0010/GB; retrieval fee per GB; minimum storage duration; minimum object size
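The storage class is simply a per-object choice at write time. A minimal boto3 sketch, with hypothetical bucket and key names:

```python
import boto3

s3 = boto3.client("s3")

# Write an object directly into the Standard-IA class.
s3.put_object(
    Bucket="demo-bucket",
    Key="reports/2019/q1.parquet",
    Body=b"...",
    StorageClass="STANDARD_IA",    # or ONEZONE_IA, INTELLIGENT_TIERING,
)                                  # GLACIER, DEEP_ARCHIVE
```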
21
S3 Intelligent-Tiering
S3 Intelligent-Tiering: automated storage tiering for data with changing access patterns, moving objects between a frequent access tier and an infrequent access tier.

S3 Lifecycle works great if your data has predictable access patterns. But many customers have data with changing access patterns: maybe access frequency cools off for a little bit and then heats back up again when you run a big analytics job. So we created S3 Intelligent-Tiering, a new storage class that automatically optimizes storage costs for data with unknown or changing access patterns. It was built using an ML model to train the S3 Intelligent-Tiering algorithm, and it is the first and only cloud storage solution to provide dynamic tiering that makes it easier than ever to optimize storage costs.

1. Optimize storage costs: uses smart monitoring and auto-tiering to automatically move data between frequent and infrequent access tiers for cost optimization.
2. No management required: customers just load their data into the S3 Intelligent-Tiering storage class, and it cost-optimizes your storage. If data goes unaccessed for 30 days, S3 Intelligent-Tiering moves it to the infrequent access tier. Then, when data in that infrequent access tier is accessed, it is automatically promoted back up to the frequent access tier, with...
3. No retrieval fees: customers don't have to worry about unexpected bill spikes when data access patterns change, because S3 Intelligent-Tiering automatically promotes data to the frequent access tier when it is accessed.

It supports all of the capabilities of the rest of the S3 storage classes, like S3 Lifecycle policies to automatically move data to S3 Glacier and S3 Glacier Deep Archive, Cross-Region Replication to any other AWS Region, etc. So, with S3 Intelligent-Tiering now available for your more active data sets, we're turning our attention to the other end of the storage spectrum...
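A hedged sketch of adopting Intelligent-Tiering via a lifecycle rule, so new objects under a prefix transition into the class and S3 handles the tiering from there. Bucket and prefix are hypothetical.

```python
import boto3

s3 = boto3.client("s3")

# Move objects under "analytics/" into S3 Intelligent-Tiering immediately.
s3.put_bucket_lifecycle_configuration(
    Bucket="demo-bucket",
    LifecycleConfiguration={
        "Rules": [{
            "ID": "to-intelligent-tiering",
            "Status": "Enabled",
            "Filter": {"Prefix": "analytics/"},
            "Transitions": [{"Days": 0,
                             "StorageClass": "INTELLIGENT_TIERING"}],
        }]
    },
)
```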
22
S3 Glacier Deep Archive: lowest cost storage class for long-term archiving and digital asset preservation

- Less than $0.001 per GB-month
- Fully managed, without tape burden
- Designed for 99.999999999% (11 9s) durability
- Recover data in 12 hours

We've also heard from customers, especially those in industries with large data sets that they want to retain for a long period of time, that they wanted to get out of the business of hosting their own tape infrastructure. Some customers used Glacier to get rid of that tape infrastructure. Glacier is a great service for archives like media archives, medical records, and regulated financial services data. These are data sets that mature, but that you may access some subset of every month. Users can restore archives from Glacier in 1-5 minutes or, for a lower-cost option, up to 12 hours. Glacier storage is as low as four-tenths of a cent per gigabyte-month. Customers have found Glacier to be so efficient that they use it, along with other S3 storage classes, as part of an active archive today.

Yet we have seen customers across all industries, like oil and gas, that still use tape to store large amounts of seismic data for lowest-cost retention. So for data sets like seismic data, that you want or are required to retain but only access every few months or years, is tape still the best option? Tape capacity is cheap, but it carries lots of operational costs. Then, after all of that work, restores can be hard: it still might take 2-3 tapes and sometimes days or even weeks to get a good restore of the data.

1. Less than 1/10 of one cent per GB-month: with 11 9s of durability and resilience across a minimum of 3 Availability Zones, Glacier has been the best archive storage value in the industry at ~$4.10 per terabyte per month. But at a price of barely over $1 per terabyte per month, Deep Archive makes it silly to even think about managing your own on-premises tape infrastructure.
2. Fully managed without the tape burden: now you don't have to worry about the muck of maintaining tape infrastructure. We manage everything needed for long-term retention, including periodic data validation, fixity checking, and the assurance you need that your data will be there when you need it, tomorrow or 30 years from now. So you get an even lower storage cost than tape with none of the hassle.
3. Designed for 11 9s of durability: Deep Archive is designed for the same 11 9s of data durability available from every other S3 storage class, which means that if you store 1 million objects in S3 Glacier Deep Archive, you only risk losing one object every 10,000 years.
4. Recover data in hours: S3 Glacier Deep Archive allows you to recover your data in 12 hours or less, versus the days or weeks required to recover data from tapes that are stored off-site.

Your data is an asset, but it's only an asset when you can actually afford to keep it around. The innovations we've brought to market in S3 and Glacier have really set the standard for storage cost-efficiency.
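Objects in Deep Archive must be restored before they can be read. A minimal boto3 sketch; the "Standard" tier maps to the ~12-hour restore and "Bulk" to the ~48-hour one, and the bucket/key names are hypothetical.

```python
import boto3

s3 = boto3.client("s3")

# Ask S3 to stage a temporary readable copy of an archived object.
s3.restore_object(
    Bucket="demo-archive-bucket",
    Key="seismic/survey-001.dat",
    RestoreRequest={
        "Days": 7,                                    # keep the copy 7 days
        "GlacierJobParameters": {"Tier": "Standard"}, # ~12 h; "Bulk" ~48 h
    },
)
```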
23
Amazon EBS: Built for dynamic workloads
Amazon EBS: Built for dynamic workloads

Managed block storage for enterprise applications.

- Simple: elastic volumes adjust size and tune performance with no disruption; back up data on EBS volumes using the point-in-time snapshot capability; Data Lifecycle Manager
- High performance: optimized for low latency or high throughput; 2x performance improvement for Provisioned IOPS (io1) SSD volumes; 60% improvement in gp2 SSD volume performance
- Reliable: control and encryption through AWS Key Management Service (KMS); designed for 99.999% availability; massive scale and flexibility

We've also made some significant investments to improve performance in Elastic Block Storage. EBS is our primary solution for durable, high-performance storage for your instances. We like to think of EBS as three main things: simple to use, performant for customer needs, and reliable.

We continue to invest in making EBS as easy and simple to use as possible. We know you aren't always going to know what your application requirements are ahead of time, and we also recognize these change over time. We are the only cloud provider that lets you adjust your volume, both size and performance, on the fly with no disruption to your workload. We call this capability elastic volumes. Our customers love the point-in-time snapshot capability. Snapshots allow customers to back up data on EBS volumes. Snapshots are incremental, which means you save on costs and only back up data that has changed since the last backup. Data Lifecycle Manager was launched in July, and provides simple, automatic backups and retention schedules for your snapshot creation based on lifecycle policies. With DLM, you won't need to use custom scripts to manage your backups or snapshots. Everything is simple, automated, and managed for you.

EBS performance is going to meet the vast majority of customer needs, but we know some customers are very sensitive to performance. Today we are launching double the peak performance for io1 SSD volumes and a 60% increase in peak performance for gp2 SSD volumes. [This gives us the highest performance SSD volume that can be supported today; potential talking point, still to be confirmed over time, because as of now Azure does not have the compute to support Ultra SSD.]

Reliability is often what it comes down to when choosing to migrate to the cloud, and with EBS being one of our fundamental solutions, reliability and security are priorities as we continue to grow performance and simplicity. EBS volumes are highly available, designed for 99.999% availability. EBS volume data is replicated across multiple servers in an Availability Zone to prevent the loss of data from the unlikely failure of any single component. Based on our annual failure rate, EBS volumes are 20 times more reliable than typical disk drives. To illustrate our 0.1-0.2% annual failure rate: if you have 1,000 EBS volumes running for 1 year, you should expect 1-2 will fail. The snapshot feature and DLM, of course, make it easy to back up your data and avoid losing anything.

What we have found crucial is that all companies, but especially enterprises, need security and control over their data. AWS Key Management Service (KMS) encryption ensures full control over security in your organization; EBS uses the highly reliable and secure KMS for encryption. When you attach an encrypted EBS volume to an EC2 instance, data stored at rest on the volume, disk I/O, and snapshots created from the volume are all encrypted. All EBS volumes support this feature, and you simply access encrypted volumes the same way you access other volumes, with no additional action.
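A minimal boto3 sketch of elastic volumes and snapshots: resize and retune a volume in place, then take an incremental, point-in-time snapshot. The volume ID is a hypothetical placeholder.

```python
import boto3

ec2 = boto3.client("ec2")

# Elastic volumes: grow capacity and raise provisioned IOPS with no
# detach and no downtime for the attached instance.
ec2.modify_volume(
    VolumeId="vol-0123456789abcdef0",
    Size=500,             # GiB
    VolumeType="io1",
    Iops=10000,
)

# Snapshot: only blocks changed since the last snapshot are stored.
ec2.create_snapshot(
    VolumeId="vol-0123456789abcdef0",
    Description="pre-upgrade backup",
)
```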
24
Amazon Elastic File System
Simple | Elastic | Scalable | Fully managed

- Highly reliable regional design; secure
- No re-architecting required
- Automatically grows and shrinks
- Lower TCO than DIY or on-premises
- Consistent IOPS and throughput
- Flexible client connectivity

EFS is the first fully managed NFS cloud file system built from the ground up for file sharing between EC2 instances. In the past, you had to own and operate a NAS array, or build and manage a file system using a third-party software package on top of EC2 and EBS. EFS is fully managed, so we take on all of the management around capacity and compute. No more surprises! It's so simple you can mount multi-petabyte shared storage on EC2 instances in moments. It automatically grows and shrinks elastically as you add or remove data. It keeps consistent performance as your data scales, and it can connect to on-premises hosts as well as EC2 instances in multiple regions. EFS stores data redundantly across multiple AZs in a region by default. Typical use cases include web serving and content management, enterprise applications, media and entertainment processing workflows, home directories, database backups, developer tools, container storage, and big data and analytics applications.
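A hedged boto3 sketch of standing up an EFS file system and exposing it in one AZ through a mount target (clients then mount it over NFSv4). The subnet ID is a hypothetical placeholder.

```python
import boto3

efs = boto3.client("efs")

# Create the file system; capacity is elastic, so no size is specified.
fs = efs.create_file_system(
    CreationToken="demo-efs-1",           # idempotency token
    PerformanceMode="generalPurpose",
    Encrypted=True,
)

# Expose it to instances in one subnet/AZ; repeat per AZ as needed.
efs.create_mount_target(
    FileSystemId=fs["FileSystemId"],
    SubnetId="subnet-0123456789abcdef0",
)
```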
25
Amazon FSx for Windows File Server: lift and shift your Windows file storage with fully managed Windows file servers

- Native Windows compatibility, back to Windows 7
- Fast and flexible performance
- Ready for enterprise apps like ERP and CRM
- Connect to Amazon EC2, Amazon WorkSpaces, Amazon AppStream 2.0, and VMware Cloud on AWS
- Handles patching and other maintenance

Amazon FSx for Windows File Server provides fully managed Windows file servers to easily lift and shift business applications to AWS. Many of these enterprise applications rely on SMB. And just like with our existing file storage service, Amazon EFS, you pay for only the resources used, with no upfront costs, minimum commitments, or additional fees.

Native Windows compatibility: built on Windows Server, you get native Windows file storage that you can access over SMB, that supports the Windows file system features you use today, and that works seamlessly with Microsoft Active Directory.

Fast and flexible performance: built on SSD storage, Amazon FSx provides fast performance, with per-file-system throughput of up to 2 gigabytes per second, tens of thousands of IOPS, and consistent sub-millisecond latencies. Similar to EFS, you can pick throughput levels independent of your file system size. Performance can scale up to tens of gigabytes per second of throughput, across hundreds of petabytes of data, by using DFS Namespaces.

Enterprise ready: Amazon FSx provides the features, performance, and security that enterprise applications like ERP, CRM, custom .NET applications, and home directories rely on. It provides the throughput, IOPS, and consistent sub-millisecond latencies needed for enterprise workloads.

Broad accessibility: by supporting the SMB protocol, you can connect your file system to Amazon EC2, VMware Cloud on AWS, Amazon WorkSpaces, and Amazon AppStream 2.0. With Microsoft Active Directory support, you get integration with your current Windows environment, and Distributed File System (DFS) Replication supports your multi-AZ deployments. Amazon FSx supports creating multiple file shares for each file system across multiple instances.

Simple and fully managed: you no longer have to worry about setting up and provisioning file servers and storage volumes, operating Windows file server software updates and patches, and manually performing backups. In minutes, you can easily create a fully managed file system with automatic backups by using the AWS Management Console, CLI, or AWS SDK.

Secure and compliant: all of your data is automatically encrypted at rest and in transit. Amazon FSx for Windows File Server is PCI-DSS compliant and HIPAA eligible. You can control user access with Windows Access Control Lists (ACLs), and all of your data is protected by automatic, highly durable daily backups. Integration with CloudTrail monitors and logs your API calls, letting you see actions taken by users on your Amazon FSx resources.

Cost-effective: you pay only for the resources you use, with no minimum commitments or up-front fees. You can launch and delete file systems in minutes, making it easy to respond to changing business needs. Amazon FSx is fully managed, and reduces the time and resources needed to manage your file storage.
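A hedged boto3 sketch of creating a managed SMB file system joined to an existing AWS Managed Microsoft AD; the directory, subnet, and security group IDs, plus the chosen sizes, are hypothetical placeholders.

```python
import boto3

fsx = boto3.client("fsx")

fsx.create_file_system(
    FileSystemType="WINDOWS",
    StorageCapacity=300,                        # GiB
    SubnetIds=["subnet-0123456789abcdef0"],
    SecurityGroupIds=["sg-0123456789abcdef0"],
    WindowsConfiguration={
        "ActiveDirectoryId": "d-0123456789",    # existing managed AD
        "ThroughputCapacity": 16,               # MB/s, independent of size
        "AutomaticBackupRetentionDays": 7,      # daily backups kept a week
    },
)
```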
26
Amazon FSx for Lustre For compute-intensive data processing use cases like HPC or Machine Learning
Raw data stored in S3 is loaded into FSx for Lustre for processing; the output of processing is returned to S3 for retention.

Amazon FSx for Lustre is a fully managed file system that is optimized for compute-intensive workloads, such as high-performance computing, machine learning, and media data processing workflows. Many of these applications require the high performance and low latencies of scale-out, parallel file systems. Running such file systems yourself requires specialized expertise and administrative overhead (provisioning storage servers, tuning performance). FSx for Lustre can process massive data sets at up to hundreds of gigabytes per second of throughput, millions of IOPS, and sub-millisecond latencies. It is seamlessly integrated with Amazon S3: link long-term data sets with high-performance file systems to run compute-intensive workloads. Copy data from S3 to FSx for Lustre, run your workloads, then write the results of the processing back to S3 for retention.
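A hedged boto3 sketch of that S3 integration: a Lustre file system linked to a bucket so inputs load from S3 and results export back. The bucket, subnet, and capacity values are hypothetical placeholders.

```python
import boto3

fsx = boto3.client("fsx")

fsx.create_file_system(
    FileSystemType="LUSTRE",
    StorageCapacity=3600,                       # GiB
    SubnetIds=["subnet-0123456789abcdef0"],
    LustreConfiguration={
        # Read raw inputs from S3; write processed results back.
        "ImportPath": "s3://demo-dataset-bucket",
        "ExportPath": "s3://demo-dataset-bucket/results",
    },
)
```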
27
Storage Gateway hybrid storage solutions: use standard storage protocols to access AWS storage services

Diagram: gateway types are File (NFS, backed by Amazon S3), Volume (iSCSI, backed by Amazon S3 with Amazon EBS snapshots), and Tape (VTL for backup servers, backed by Amazon S3 and Amazon Glacier). Application servers, backup servers, and enterprise storage on the customer premises connect to AWS over the Internet or AWS Direct Connect, into an Amazon VPC.

"Enable cloud storage on-premises as part of your AWS platform"
- Native access via industry-standard protocols for file, block, and tape
- Secure and durable storage in Amazon S3 and Glacier
- Optimized data transfer from on-premises to AWS
- Low-latency access to frequently used data
- Integrated with AWS security and management services: Amazon CloudWatch, AWS CloudTrail, AWS IAM, AWS KMS
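As one concrete example of the File gateway type, here is a hedged boto3 sketch of creating an NFS file share on an already-activated gateway, backed by an S3 bucket. The gateway ARN, IAM role, and bucket ARN are hypothetical placeholders.

```python
import boto3

sgw = boto3.client("storagegateway")

# Expose an S3 bucket to on-premises clients as an NFS share.
sgw.create_nfs_file_share(
    ClientToken="demo-share-1",   # idempotency token
    GatewayARN="arn:aws:storagegateway:eu-west-1:111122223333:gateway/sgw-ABCD1234",
    Role="arn:aws:iam::111122223333:role/FileGatewayS3Access",
    LocationARN="arn:aws:s3:::demo-file-share-bucket",
)
```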
28
Thank you!