1
Introduction to Cloud Computing
Dr. Sanjay P. Ahuja, Ph.D., FIS Distinguished Professor of CIS, School of Computing, College of Computing, Engineering, and Construction, UNF
2
Cloud Computing: Towards Utility Computing
“Computing may someday be organized as a public utility.” (John McCarthy, MIT Centennial, 1961)
Huge computational and storage capabilities available from utilities
Metered billing (pay for what you use)
A simple interface to access the capability (e.g., plugging into an outlet)
3
New Aspects of Cloud Computing from a Hardware Point of View
The illusion of infinite computing resources available on demand, eliminating the need for cloud users to plan far ahead for provisioning.
The elimination of an up-front commitment by cloud users, allowing companies to start small and increase resources only when their needs grow.
The ability to pay for computing resources on a short-term basis as needed (e.g., processors by the hour and storage by the day) and to release them when no longer useful, thereby rewarding conservation.
All three are important to the technical and economic changes made possible by Cloud Computing. Past efforts at utility computing failed because, in each case, one or two of these three critical characteristics were missing. For example, Intel Computing Services required negotiating a contract and committing to longer-term use than per-hour rental.
4
Objectives of Cloud Computing
Elasticity: ability to scale virtual machine resources up or down.
On-demand usage: ability to add or remove computing power (CPU, memory) and storage according to demand.
Pay-per-use: pay only for what you use.
Multi-tenancy: ability for multiple customers to access their servers in the data center in an isolated manner.
5
Why Now?
The construction and operation of extremely large-scale, commodity-computer datacenters at low-cost locations was the key enabler of Cloud Computing. It uncovered factors of 5 to 7 decrease in the cost of electricity, network bandwidth, operations, software, and hardware available at these very large economies of scale. These factors, combined with statistical multiplexing to increase utilization, meant that cloud providers could offer services below the costs of a medium-sized datacenter and still make a good profit.
Other factors:
Pervasive broadband Internet
Fast x86 virtualization
Pay-as-you-go billing model
Mobile apps
Rise of analytics (BI, data mining)
6
Definition of Cloud Computing
“Cloud computing is a model for enabling ubiquitous, convenient, on-demand network access to a shared pool of configurable computing resources (e.g., networks, servers, storage, applications, and services) that can be rapidly provisioned and released with minimal management effort or service provider interaction.” (NIST Definition of Cloud Computing) [1]
This cloud model is composed of five essential characteristics, three service models, and four deployment models.
Note 1: Cloud computing is still an evolving paradigm. Its definitions, use cases, underlying technologies, issues, risks, and benefits will be refined in a spirited debate by the public and private sectors. These definitions, attributes, and characteristics will evolve and change over time.
Note 2: The cloud computing industry represents a large ecosystem of many models, vendors, and market niches. This definition attempts to encompass all of the various cloud approaches.
7
Cloud Computing Taxonomy
[Figure: cloud computing taxonomy]
8
Essential Cloud Characteristics
[Figure: the five essential cloud characteristics]
9
Essential Cloud Characteristics
On-demand self-service: customers can provision computing capabilities as needed.
Broad network access: resources are available over the network through standard mechanisms.
Resource pooling: the provider’s computing resources are pooled to serve multiple consumers using a multi-tenant model.
Rapid elasticity: capabilities can be rapidly and elastically provisioned, preferably automatically.
Measured service: resource usage is monitored and automatically controlled and optimized, providing transparency for both the provider and the consumer of the utilized service.
10
Cloud Characteristics
11
Cloud Service Model Architectures
[Figure: cloud service model architectures; examples shown include SalesForce (SaaS) and Google App Engine (PaaS)]
12
Cloud Service Models
Cloud Infrastructure as a Service (IaaS)
The capability provided to the consumer is to rent processing, storage, networks, and other fundamental computing resources. The consumer does not manage or control the underlying cloud infrastructure but has control over operating systems, storage, deployed applications, and possibly select networking components (e.g., firewalls, load balancers). Examples: Amazon, Rackspace.
Cloud Platform as a Service (PaaS)
The capability provided to the consumer is to deploy onto the cloud infrastructure consumer-created applications using programming languages and tools supported by the provider (e.g., Java, Python, .NET). The consumer only has control over the deployed applications and possibly application hosting environment configurations. Examples: Google App Engine, Microsoft Azure.
Cloud Software as a Service (SaaS)
The capability provided to the consumer is to use the provider’s applications running on a cloud infrastructure, accessible from various client devices through a thin client interface such as a Web browser. The consumer only has control over limited user-specific application configuration settings. Example: SalesForce.
13
Services Delivered in each Model
[Figure: services delivered in each service model]
14
Cloud Deployment Models
[Figure: cloud deployment models]
15
Virtualization – the Backbone of Cloud Computing
16
Virtualization
The division of a single physical server into multiple “virtual” servers is the backbone of Cloud Computing, as it allows far greater flexibility and resource utilization. Virtualization also saves electric power, space, and cooling, since the number of physical server machines running is greatly reduced. (Source: escope.net)
17
Virtualization (contd.)
[Figure: the virtualized stack: hardware, hypervisor, guest OS, applications]
To build clouds we need to aggregate large amounts of computing, storage, and networking resources in a virtualized manner.
Virtual Machines (VMs): a VM is built with virtual resources managed by a guest OS to run a specific application. Between the VMs and the host platform, a middleware layer called the Virtual Machine Monitor (VMM), or hypervisor, is deployed.
There are two types of hypervisors: Type 1 (bare-metal) and Type 2 (hosted). The hypervisor runs in privileged mode, and the guest OS can be any OS, such as Linux or Windows. Hypervisors provide almost native performance to the guest OSs (VMs), generally losing only 3–4% of the Central Processing Unit’s cycles to the running of the hypervisor.
18
Virtualization (contd.)
Virtual Machines and Virtualization Middleware
Type 1 (bare-metal) hypervisors: examples of the leading bare-metal hypervisors are VMware ESX(i) (proprietary), Citrix XenServer (free and open source, FOS), and KVM (Kernel-based Virtual Machine, FOS). ESX and XenServer are installed directly on the hardware; KVM sits within the Linux kernel.
Type 2 (hosted) hypervisors: examples of the leading hosted hypervisors are VirtualBox (FOS) and QEMU (FOS). Hosted hypervisors are often used by IT workers who need the flexibility to install, run, and try out different OSs on their own computers without disrupting their current computing environment.
Many VMs can be run on a single hypervisor. The resource most in demand is system memory, and because RAM is cheap, this makes virtualization an attractive proposition. A VM can be suspended and stored in secondary storage, resumed, or migrated from one hardware platform to another.
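As a concrete sketch of this lifecycle, the example below drives a KVM/QEMU hypervisor through the libvirt Python bindings: it lists the VMs on a host, then suspends, resumes, and saves one to secondary storage. This is a minimal illustration assuming a Linux host with libvirtd and libvirt-python installed; the domain name "demo-vm" and the save path are hypothetical.

```python
# Minimal sketch of VM lifecycle operations via libvirt (fronts KVM/QEMU/Xen).
# Assumes a Linux host with libvirtd running and libvirt-python installed;
# the domain name "demo-vm" and the save path are hypothetical examples.
import libvirt

conn = libvirt.open("qemu:///system")  # connect to the local hypervisor

# List all defined VMs (domains) and their state.
for dom in conn.listAllDomains():
    state, maxmem, mem, vcpus, cputime = dom.info()
    print(f"{dom.name()}: state={state}, vCPUs={vcpus}, mem={mem // 1024} MiB")

dom = conn.lookupByName("demo-vm")      # hypothetical VM name
dom.suspend()                           # pause the VM in memory
dom.resume()                            # resume execution
dom.save("/var/tmp/demo-vm.state")      # suspend to secondary storage (stops the VM)
conn.restore("/var/tmp/demo-vm.state")  # restore it from the saved image

conn.close()
```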
19
Cloud Economics
20
Cloud Computing: Economic Impacts
The NSF Report on Support for Cloud Computing, submitted to Congress in February 2012 in response to the America COMPETES Reauthorization Act, recognizes that "cloud computing is an area vital to the economic growth and competitiveness of the nation" [2].
A January 2012 report from the London School of Economics states that “cloud computing has a clear role in stimulating the economy and creating jobs,” with jobs being created both within the datacenters hosting the clouds and in the companies using cloud services, as well as through a high start-up rate exemplified by smart-phone and mobile cloud services [3].
21
Cloud Economics: CapEx to OpEx
The economic appeal of Cloud Computing: “converting capital expenses to operating expenses (CapEx to OpEx),” or “pay as you go.” Hours purchased can be distributed non-uniformly in time (e.g., use 100 server-hours today and no server-hours tomorrow, and still pay only for what you use).
Even though Amazon’s pay-as-you-go pricing (for example) could be more expensive than buying and depreciating a comparable server over the same period, the cost is outweighed by the extremely important economic benefits of elasticity and transference of risk, especially the risks of over-provisioning (underutilization) and under-provisioning (saturation).
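A back-of-the-envelope sketch of the CapEx-to-OpEx trade-off follows; every number in it (server price, hourly rate, utilization) is an assumed illustration, not a quoted price.

```python
# Illustrative CapEx vs. OpEx comparison; all numbers are assumed, not real quotes.
HOURS_PER_YEAR = 24 * 365

owned_server_price = 9000.0      # assumed purchase price, depreciated over 3 years
owned_yearly_cost = owned_server_price / 3

cloud_rate_per_hour = 0.50       # assumed on-demand hourly rate for a comparable VM
utilization = 0.15               # fraction of hours the capacity is actually needed

# Owned hardware costs the same whether it is busy or idle;
# the cloud bill scales with the hours actually used.
cloud_yearly_cost = cloud_rate_per_hour * HOURS_PER_YEAR * utilization

print(f"Owned server: ${owned_yearly_cost:,.0f}/year regardless of load")
print(f"Cloud rental: ${cloud_yearly_cost:,.0f}/year at {utilization:.0%} utilization")
# At low utilization, pay-as-you-go wins even if its hourly rate,
# taken alone, is more expensive than owning.
```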
22
Elasticity: Shifting the Risk
Cloud Computing’s ability to add or remove resources at a fine grain (one server at a time with Amazon’s EC2) and with a lead time of minutes rather than weeks allows matching resources to workload much more closely.
Without elasticity, users provision for the peak and allow the resources to remain idle at non-peak times; the more pronounced the variation, the more the waste. Even if the peak load can be correctly anticipated, resources are wasted during non-peak times.
Real-world estimates of server utilization in datacenters range from 5% to 20%. This may sound shockingly low, but it is consistent with the observation that for many services the peak workload exceeds the average by factors of 2 to 10.
23
Elasticity: Shifting the Risk (contd.)
Pay by use instead of provisioning for the peak (which leads to underutilization of resources). Even if service operators predict spike sizes correctly, capacity is wasted; and if they overestimate the spike they provision for, it’s even worse.
Example: a service’s peak daily demand at noon requires 500 servers and its trough daily demand requires 100 servers, so the average demand over a whole day is 300 servers. The actual work over the whole day is 300 × 24 = 7,200 server-hours, but since we must provision for the peak of 500 servers, we pay for 500 × 24 = 12,000 server-hours, a factor of 1.7 more than what is needed.
In fact, the above example underestimates the benefits of elasticity, because in addition to simple diurnal patterns, most nontrivial services also experience seasonal or other periodic demand variation (e.g., e-commerce peaks in December and photo sharing sites peak after holidays) as well as unexpected demand bursts due to external events (e.g., news events). Since it can take weeks to acquire and rack new equipment, the only way to handle such spikes without elasticity is to provision for them in advance.
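The waste factor above can be reproduced with a toy diurnal demand model; the sine-shaped curve below is purely illustrative, chosen to hit the 100-server trough, 500-server peak, and 300-server average from the example.

```python
# Reproduces the peak-provisioning arithmetic above with a toy diurnal model.
import math

def demand(hour):
    """Illustrative diurnal demand: 100 servers at the trough, 500 at the noon peak."""
    return 300 + 200 * math.sin(math.pi * (hour - 6) / 12)  # peaks at hour 12

hours = range(24)
used_server_hours = sum(demand(h) for h in hours)   # what the workload actually needs
peak = max(demand(h) for h in hours)
provisioned_server_hours = peak * 24                # fixed provisioning for the peak

print(f"needed:       {used_server_hours:,.0f} server-hours")
print(f"provisioned:  {provisioned_server_hours:,.0f} server-hours")
print(f"waste factor: {provisioned_server_hours / used_server_hours:.2f}")
```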
24
Elasticity: Shifting the Risk (contd.)
Service operators may also underestimate the spike, accidentally turning away excess users. Not only do rejected users generate zero revenue, they may never come back due to poor service. Users will desert an under-provisioned service until the peak user load equals the datacenter’s usable capacity, at which point users again receive acceptable service, but with fewer potential users. While the monetary effects of over-provisioning are easily measured, those of under-provisioning are harder to measure yet potentially equally serious.
25
Cloud Computing Economic Impact
Switching to Gmail can be almost 80 times more energy efficient than running email in-house. You’d have to watch YouTube for three straight days for Google’s servers to consume the amount of energy required to manufacture, package, and ship a single DVD; the servers needed to play one minute of YouTube consume only a tiny fraction of a kilowatt-hour of energy.
26
Cloud Provider Market Share
[Figure: cloud provider market share]
27
Cloud Computing Trends
Rise in revenues for cloud providers: 80% of organizations are predicted to migrate toward the cloud, hosting, and colocation services by 2025 (Computerworld). Cloud expenses are expected to amount to 70% of all tech spending by 2020 (Trend Micro).
Focus on cloud security: enterprises are shifting intricate workloads to the cloud, paving the way for aspects like cloud workload security, threat intelligence, and encryption.
Hybrid cloud: just under 50% of enterprises are predicted to adopt a hybrid cloud model by 2020 to provide a seamless experience.
AI and ML in the cloud: about 67% of IT professionals are of the opinion that AI and ML will be the driving force for cloud adoption in 2020.
Serverless cloud computing: Gartner predicts that more than 20% of global enterprises will adopt serverless computing technologies (such as AWS Lambda) in 2020, a significant increase from the 5% mark of 2018.
28
Cloud Computing Security
29
Security is the Major Issue (2008)
Cloud computing often leverages:
Massive scale
Homogeneity
Virtualization
Low-cost software
Geographic distribution
Advanced security technologies
30
Cloud Challenges are Changing (2016)
For the longest time, security was the number-one voiced cloud challenge. In 2016, however, lack of resources/expertise inched ahead. Organizations are placing ever more workloads in the cloud while cloud technologies continue to advance rapidly; due to these factors, organizations are having a hard time keeping up with the tools, and the need for expertise continues to grow. Luckily, many common tasks performed by these specialists can be automated. To this end, companies are turning to DevOps tools, like Chef and Puppet, to perform tasks like monitoring usage patterns of resources and automating backups at predefined time periods. These tools also help optimize the cloud for cost, governance, and security.

Headlines highlighting data breaches, compromised credentials and broken authentication, hacked interfaces and APIs, and account hijacking haven’t helped alleviate concerns. All of this makes trusting sensitive and proprietary data to a third party hard to stomach for some. Luckily, as cloud providers and users mature, security capabilities are constantly improving. To ensure your organization’s privacy and security are intact, verify that the SaaS provider has secure user identity management, authentication, and access control mechanisms in place. Also, check which data security and privacy laws they are subject to.

The on-demand and scalable nature of cloud computing services sometimes makes it difficult to define and project quantities and costs. Luckily, there are several ways to keep cloud costs in check.

Proper IT governance should ensure IT assets are implemented and used according to agreed-upon policies and procedures, that these assets are properly controlled and maintained, and that they support your organization’s strategy and business goals. In today’s cloud-based world, IT does not always have full control over the provisioning, de-provisioning, and operation of infrastructure. This has increased the difficulty for IT to provide the governance, compliance, and risk management required. To mitigate the various risks and uncertainties in transitioning to the cloud, IT must adapt its traditional governance and control processes to include the cloud.

The performance of the organization’s BI and other cloud-based systems is also tied to the performance of the cloud provider: when your provider is down, you are also down. This isn’t uncommon; over the past couple of years all the big cloud players have experienced outages. Make sure your provider has the right processes in place and that they will alert you if there is ever an issue. For the data-driven organization, real-time data is imperative. With the inherent lack of control that comes with cloud computing, companies may run into real-time monitoring issues; make sure your SaaS provider has real-time monitoring policies in place to help mitigate them.

To make the best of the cloud, take a strategic, iterative approach to implementation, explore hybrid cloud solutions, involve business and IT teams, invest in a CIO, and choose the right BI SaaS partner.
31
Cloud computing: Must-have Enterprise Technology (2018)
New private cloud, public cloud, and SaaS innovations accelerated enterprise transformation everywhere in 2018.
32
Analyzing Cloud Security
Some key issues: trust, multi-tenancy, encryption, compliance.
Cloud security has both advantages and challenges.
33
Cloud Security Advantages
Shifting public data to an external cloud reduces the exposure of internal sensitive data
Dedicated security team
Greater investment in security infrastructure
Cloud homogeneity makes security auditing/testing simpler
Clouds enable automated security management and real-time detection of system tampering
Rapid re-constitution of services
Redundancy / disaster recovery
34
Cloud Security Challenges
Trusting the vendor’s security model
Multi-tenancy
Data ownership issues
Attraction to hackers (high-value target)
Security of virtual instances in the cloud
Obtaining support from the cloud vendor for security-related investigations
35
Cloud Security Challenges (contd.)
Data dispersal and international privacy laws (EU Data Protection Directive and the U.S. Safe Harbor program; exposure of data to foreign governments and data subpoenas)
Proprietary cloud vendor implementations can’t be examined
Loss of physical control
Possibility of massive outages
Encryption needs for cloud computing:
Encrypting access to the cloud resource control interface
Encrypting administrative access to virtual instances
Encrypting access to applications
Encrypting application data at rest

Safe Harbor bridges the differences in approach and provides a streamlined means for U.S. organizations to comply with the EU Directive. The European Commission’s Directive on Data Protection went into effect in October 1998 and prohibits the transfer of personal data to non-European Union countries that do not meet the EU “adequacy” standard for privacy protection. While the United States and the EU share the goal of enhancing privacy protection for their citizens, the United States takes a different approach to privacy from that taken by the EU. In order to bridge these differences and provide a streamlined means for U.S. organizations to comply with the Directive, the U.S. Department of Commerce, in consultation with the European Commission, developed the “Safe Harbor” framework. The U.S. Department of Commerce, in consultation with the Federal Data Protection and Information Commissioner of Switzerland, developed a separate “Safe Harbor” framework to bridge the differences between the two countries’ approaches to privacy and provide a streamlined means for U.S. organizations to comply with Swiss data protection law.
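As one concrete illustration of the last encryption item (encrypting application data at rest), the sketch below encrypts data client-side before it ever reaches a cloud store, using the Fernet recipe from the Python cryptography package. This is one possible approach, not any particular vendor's mechanism; real deployments would also need proper key management (e.g., a KMS or HSM).

```python
# Minimal sketch: client-side encryption of data at rest before cloud upload.
# Uses the Fernet recipe (AES-128-CBC + HMAC) from the `cryptography` package.
# Key management is out of scope here; in practice the key would live in a KMS/HSM.
from cryptography.fernet import Fernet

key = Fernet.generate_key()   # in production, fetch this from a key-management service
cipher = Fernet(key)

plaintext = b"proprietary business record"
ciphertext = cipher.encrypt(plaintext)   # this is what gets stored in the cloud

# ... later, after downloading the ciphertext back from the cloud store ...
recovered = cipher.decrypt(ciphertext)
assert recovered == plaintext
```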
36
Typical use case of provisioning a virtual machine
[Figure: provisioning a virtual machine]
37
Typical use case of provisioning a virtual machine
The management environment consists of the components required to effectively deliver services to consumers. The services offered span from image management and provisioning of machines to billing, accounting, metering, and more. The cloud management system (CMS) forms the heart of the management environment, along with the hardware components.
The managed environment is composed of the physical servers, and in turn the virtual servers, that are managed by the management environment. The servers in the managed environment belong to a customer pool, where customers or users can create virtual servers on demand and scale up or down as needed.
The management environment controls and processes all incoming requests to create, destroy, manage, and monitor virtual machines and storage devices. In the context of a public cloud, users get direct access to the VMs created in the managed environment through the Internet; they can access the machines after they are provisioned by the management layer.
38
Typical use case of provisioning a virtual machine
The previous figure describes the following actions:
The user makes a request to create a VM by logging onto the cloud portal.
The request is intercepted by the request manager and forwarded to the management environment.
The management environment, on receiving the request, interprets it and applies provisioning logic to create a VM from the set of available physical servers.
External storage is attached to the VM from a storage area network (SAN) store during provisioning, in addition to the local storage.
After the VM is provisioned and ready to use, the user is notified and gains total control of the VM.
The user can access the VM through the public Internet because the VM has a public IP address (e.g., via SSH).
ISDM, IBM Systems Director, and blade servers are shown to depict the components of a cloud system.
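On a public IaaS cloud, the same request/provision/notify/connect flow can be driven through an API rather than a portal. The sketch below uses AWS's boto3 SDK as one example; the AMI ID and key-pair name are placeholders, not real resources.

```python
# Sketch of the use case above on a public IaaS cloud, using the boto3 SDK.
# The AMI ID and key-pair name are placeholders, not real resources.
import boto3

ec2 = boto3.resource("ec2", region_name="us-east-1")

# Steps 1-3: request a VM; the provider's management environment picks a host.
instances = ec2.create_instances(
    ImageId="ami-0123456789abcdef0",  # placeholder AMI
    InstanceType="t3.micro",
    KeyName="my-keypair",             # placeholder key pair for SSH access
    MinCount=1,
    MaxCount=1,
)
vm = instances[0]

# Step 5: block until the VM is provisioned and ready to use.
vm.wait_until_running()
vm.reload()  # refresh attributes such as the assigned public IP

# Step 6: the user connects over the public Internet, e.g. via SSH.
print(f"ssh -i my-keypair.pem ec2-user@{vm.public_ip_address}")
```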
39
Cloud Ecosystem
[Figure: the cloud ecosystem for building private clouds. (a) Cloud consumers need flexible infrastructure on demand. (b) Cloud management provides remote and secure interfaces for creating, controlling, and monitoring virtualized resources on an infrastructure-as-a-service cloud. (c) Virtual infrastructure (VI) management provides primitives to schedule and manage VMs across multiple physical hosts. (d) VM managers provide simple primitives (start, stop, suspend) to manage VMs on a single host. From “Virtual Infrastructure Management in Private and Hybrid Clouds,” IEEE Internet Computing, September 2009.]
40
Cloud Ecosystem
The public cloud ecosystem has evolved around providers, users, and technologies. The previous figure suggests one possible ecosystem for private clouds, with four levels of ecosystem development: cloud consumers, cloud management, VI management, and VM managers.
At the cloud management level, the cloud manager provides virtualized resources over an IaaS platform.
At the virtual infrastructure (VI) management level, the manager allocates VMs over multiple server clusters (examples: OpenNebula, VMware vSphere). These can drive VM managers like Xen and KVM, and support dynamic placement and VM management on a pool of physical resources, automatic load balancing, server consolidation, and dynamic infrastructure resizing and partitioning.
Finally, at the VM management level, the VM managers handle the VMs installed on individual host machines (examples: Xen, VMware, KVM).
An ecosystem of cloud tools attempts to span both cloud management and VI management. Besides public clouds such as Amazon EC2, open-source cloud tools for virtualization of cloud infrastructure include Eucalyptus and Globus Nimbus. To access these cloud tools, one can use the Amazon EC2WS interface, among others.
41
Cloud Case Study
42
Amazon Cloud: EC2
Amazon cloud components: Elastic Compute Cloud (EC2) and Simple Storage Service (S3).
Some components of EC2: Amazon Machine Images and instances, Regions and Availability Zones, storage.
Amazon S3 functionality:
Write, read, and delete objects containing from 1 byte to 5 terabytes of data each. The number of objects you can store is unlimited.
Each object is stored in a bucket and retrieved via a unique, developer-assigned key.
A bucket can be stored in one of several Regions. You can choose a Region to optimize for latency, minimize costs, or address regulatory requirements. Objects stored in a Region never leave the Region unless you transfer them out. For example, objects stored in the EU (Ireland) Region never leave the EU.
Authentication mechanisms are provided to ensure that data is kept secure from unauthorized access. Objects can be made private or public, and rights can be granted to specific users.
Options for secure data upload/download and encryption of data at rest are provided for additional data protection.
Uses standards-based REST and SOAP interfaces designed to work with any Internet-development toolkit.
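A minimal boto3 sketch of this bucket/key model follows: create a bucket pinned to the EU (Ireland) Region, then write, read, and delete an object by its key. The bucket name is a hypothetical placeholder (bucket names are globally unique).

```python
# Minimal sketch of the S3 bucket/key model with boto3.
# The bucket name is a hypothetical placeholder; bucket names are globally unique.
import boto3

s3 = boto3.client("s3", region_name="eu-west-1")  # EU (Ireland)

# A bucket lives in exactly one Region; objects stay there unless you move them.
s3.create_bucket(
    Bucket="example-unf-cloud-notes",
    CreateBucketConfiguration={"LocationConstraint": "eu-west-1"},
)

# Each object is stored under a developer-assigned key and retrieved by that key.
s3.put_object(Bucket="example-unf-cloud-notes", Key="notes/lecture1.txt",
              Body=b"Introduction to Cloud Computing")
obj = s3.get_object(Bucket="example-unf-cloud-notes", Key="notes/lecture1.txt")
print(obj["Body"].read())

s3.delete_object(Bucket="example-unf-cloud-notes", Key="notes/lecture1.txt")
```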
43
Amazon Cloud EC2: AMI
An Amazon Machine Image (AMI) is a template that contains a software configuration (operating system, web/application server, and applications). From an AMI, one can launch instances, which are running copies of the AMI; multiple instances, and different types of instances, can be launched from a single AMI.
Amazon publishes many AMIs that contain common software configurations for public use. In addition, members of the AWS developer community have published their own custom AMIs. You can also create your own custom AMIs; doing so enables you to quickly and easily start new instances that have everything you need. For example, if your application is a web site or web service, your AMI could include a web server, the associated static content, and the code for the dynamic pages. As a result, after you launch an instance from this AMI, your web server starts and your application is ready to accept requests.
An instance type essentially determines the hardware of the host computer used for your instance. Each instance type offers different compute and memory capabilities; select one based on the amount of memory and computing power that you need for the application or software you plan to run on the instance.
Your instances keep running until you stop or terminate them, or until they fail. If an instance fails, you can launch a new one from the AMI.
44
Amazon Cloud EC2: AMI (contd.)
An instance type is essentially a hardware archetype. The user selects a particular instance type based on the amount of memory and computing power needed for the application or software that the user plans to run on the instance.
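To make the choice concrete, the EC2 API can report each archetype's hardware; the sketch below queries a few instance types with boto3's describe_instance_types call. The shortlist of type names is just an example.

```python
# Sketch: compare hardware archetypes (instance types) before choosing one.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# An illustrative shortlist; in practice you would filter on your app's needs.
resp = ec2.describe_instance_types(
    InstanceTypes=["t3.micro", "m5.large", "r5.xlarge"]
)
for it in resp["InstanceTypes"]:
    vcpus = it["VCpuInfo"]["DefaultVCpus"]
    mem_gib = it["MemoryInfo"]["SizeInMiB"] / 1024
    print(f'{it["InstanceType"]}: {vcpus} vCPUs, {mem_gib:.0f} GiB RAM')
```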
45
Amazon Cloud EC2: Regions and Availability Zones
Amazon has data centers in different areas of the world, or Regions (for example, North America, Europe, and Asia). By launching instances in separate Regions, the application can be placed closer to specific customers or meet legal or other requirements.
Each Region contains multiple distinct locations called Availability Zones. Each Availability Zone is engineered to be isolated from failures in other Availability Zones and to provide inexpensive, low-latency network connectivity to other zones in the same Region. By launching instances in separate Availability Zones, you can protect your applications from the failure of a single location.
46
Amazon Cloud EC2: Storage
To store data, Amazon EC2 offers the following storage options: Amazon Elastic Block Store (Amazon EBS), the Amazon EC2 instance store, and Amazon Simple Storage Service (Amazon S3).
Amazon EBS
Amazon EBS provides instances with persistent, block-level storage. Amazon EBS volumes are essentially hard disks that you can attach to a running instance; multiple volumes can be attached to an instance. Amazon EBS is particularly suited for applications that require a database, file system, or access to raw block-level storage, and it is the option most common applications use.
47
Amazon Cloud EC2: Storage
To keep a backup copy, a snapshot of the volume can be created and stored in Amazon S3. Amazon S3 is storage for the Internet: it provides a simple web service interface that enables storage and retrieval of any amount of data from anywhere on the web. A new Amazon EBS volume can be created from a snapshot and attached to another instance.
(At the time, S3 storage cost about 9.5 cents per GB per month, with tiered pricing out to more than 5,000 TB, i.e., 5 PB; each object can hold up to 5 TB, with an unlimited number of objects.)
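A minimal boto3 sketch of this snapshot workflow follows: snapshot a volume, wait for it to complete, then create a fresh volume from it. The volume ID and Availability Zone are placeholders.

```python
# Sketch: back up an EBS volume to a snapshot, then clone it as a new volume.
# The volume ID and Availability Zone below are placeholders.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Snapshot the existing volume (the backup is stored in Amazon S3 behind the scenes).
snap = ec2.create_snapshot(VolumeId="vol-0123456789abcdef0",
                           Description="nightly backup")
ec2.get_waiter("snapshot_completed").wait(SnapshotIds=[snap["SnapshotId"]])

# Create a fresh volume from the snapshot; it can be attached to another instance.
vol = ec2.create_volume(SnapshotId=snap["SnapshotId"],
                        AvailabilityZone="us-east-1a")
print("new volume:", vol["VolumeId"])
```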
48
Amazon Cloud EC2: Storage
Instance Store
All instance types, with the exception of Micro instances, offer an instance store: storage that doesn't persist if the instance is stopped or terminated. The instance store is an option for inexpensive temporary storage and can be used when data persistence is not required.
49
Amazon Cloud: Networking and Security
50
References
[1] NIST Definition of Cloud Computing.
[2] NSF Report on Support for Cloud Computing, February 2012.
[3] Etro, F., The Economic Impact of Cloud Computing on Business Creation in Europe.
[4] Federal Cloud Computing Strategy, February 2011.
[5] IDC: Cloud Computing Will Create 14 Million New Jobs by [year elided in source].
[6] LTE: Change Is In The Air.
[7] Report: Mobile Devices to Surpass PCs in Web Access by [year elided in source].
[8] How Big Will Cloud Computing Revenues Be in 2016?
[9] New Forecast: Emerging Markets Lead World in Social Networking Growth.
51
Choosing the Cloud is an Easy Decision …
[Source: dilbert.com]