
1 Understanding Virtualization
Introduction

2 Introduction
Basic computing has always been bound by its limitations, most of them physical. You can only store as much data as you have the capacity to hold. Every node that data must pass through on a network slows transmission, and the bandwidth of the connection restricts how much data can be passed at once. Computer technologies are constantly striving to break through these limitations. Some of the resulting advances are simple, such as adopting fiber optics to expand bandwidth; others seem complex. Virtualization is a technique used to overcome limitations across multiple aspects of computing. While it may seem complex in implementation, virtualization is actually simple in concept.

3 What is Virtualization?
Virtualization, in its broadest sense, is the emulation of one or more workstations or servers within a single physical computer: the emulation of hardware within a software platform. This type of virtualization is sometimes referred to as full virtualization, as it allows one physical computer to share its resources across a multitude of environments. This means that a single computer can essentially take on the role of multiple computers. The Windows desktop environment is a simple example of the virtualization concept from a user perspective. Multiple users can have access to the same computer with different logons, which results in different access to items on the computer, different preferences such as wallpapers and themes, and a different desktop. The desktop itself is an emulation of files (in the form of shortcuts) on the computer. On a server, virtualization allows several virtual machines to operate simultaneously, allowing multiple users to use server resources at the same time without creating conflicts between the users themselves.

4 What is Virtualization?
A simplified timeline of the emergence of virtualization technology:
1960s – IBM mainframe
1980s – x86 PC and LAN
1999 – VMware for x86 PCs
Virtualization is not limited to the simulation of entire machines. Although virtualization technology has been around for many years and can provide a wide range of benefits to users, IT professionals, large businesses, and government organizations, its many benefits are only now starting to be realized. The concept of virtualization was first realized in the 1960s. It was implemented by IBM to help organize their massive mainframe machines into separate 'virtual machines' and maximize the available mainframe's efficiency. Before virtualization was introduced, a mainframe could only work on one process at a time, so resources were wasted. In theory, any computer resource can be virtualized. For example, virtual memory enables data that may be scattered across a computer's RAM and hard drive to appear as if stored side by side (a tiny sketch of this idea follows these notes). Computer resources are generally categorized as storage, processing (computing), and network resources. Here are some examples of virtualization in practice:
Storage resources – RAM, virtual memory, hard drive partitions, distributed file systems, hypervisors, data/database virtualization, RAID
Computing resources – virtual machines, desktops, applications (web applications), parallel processing
Network resources – virtual private networks (VPNs), IP addressing, web addressing
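To make the virtual memory point concrete, here is a minimal Python sketch using the standard mmap module: a file on disk is mapped into the process's address space and then read and written as if it were ordinary in-memory bytes. The file name is purely illustrative.

import mmap

# Create a small file on disk to act as the "scattered" backing storage.
with open("example.bin", "wb") as f:
    f.write(b"hello virtualization")

# Map the file into memory: disk-backed data now behaves like a byte buffer.
with open("example.bin", "r+b") as f:
    mm = mmap.mmap(f.fileno(), 0)   # 0 = map the whole file
    print(mm[0:5])                  # b'hello' -- read as if it were RAM
    mm[0:5] = b"HELLO"              # writes go back through to the file
    mm.close()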

5 Growth
The massive growth in the use of computer technology created new IT demands as well as problems. Some of these problems included:
Low hardware infrastructure utilization
Rising physical infrastructure costs
Rising IT management costs
Insufficient disaster protection
High-maintenance, costly end-user desktops
The requirement for multiple environments
The most viable solution to these types of issues was hardware virtualization. In 1999, VMware introduced their first virtualization application for x86-based systems. At its core, virtualization is a means of abstracting the physical properties of computing hardware. Modern machines can now allocate different hardware resources separately, leading to greater efficiency in processing.

6 How does Virtualization work?
So what is virtualization really? Because virtualization is used in many different capacities, the answer is not simple. This slide and the next two walk through four scenarios where virtualization is used.
The first scenario focuses on basic storage virtualization. Imagine you have three separate storage containers (drives) of 100 terabytes each. Each container is used for a specific function or application, and because you have planned for potential growth, there is more than enough storage capacity available. Unfortunately, that capacity is underutilized. With virtualization, the capacity of the three drives can be abstracted, combined, and manipulated on a virtual layer: a single 300TB container can be created which is 56% utilized, or a physical drive can be removed and the same functions serviced from two drives instead of three. Cloud computing relies heavily on virtualization because a single storage drive can be virtually separated into several smaller containers; a 100TB container may be carved into ten thousand 10-gigabyte containers, for example. A rough code sketch of the pooling idea follows these notes.
[Slide figure: three 100TB volumes with varying utilization consolidated into a single 300TB volume at 56% utilization.]
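As a rough illustration, here is a minimal Python sketch (the class names and numbers are invented for this deck, not taken from any real storage product) of a virtual volume that pools several physical drives and reports aggregate utilization:

class PhysicalDrive:
    def __init__(self, name, capacity_tb, used_tb):
        self.name = name
        self.capacity_tb = capacity_tb
        self.used_tb = used_tb

class VirtualVolume:
    """Abstracts several physical drives into one logical container."""
    def __init__(self, drives):
        self.drives = drives

    @property
    def capacity_tb(self):
        return sum(d.capacity_tb for d in self.drives)

    @property
    def used_tb(self):
        return sum(d.used_tb for d in self.drives)

    @property
    def utilization(self):
        return self.used_tb / self.capacity_tb

# Three underutilized 100TB drives become one 300TB pool.
pool = VirtualVolume([
    PhysicalDrive("A", 100, 60),
    PhysicalDrive("B", 100, 58),
    PhysicalDrive("C", 100, 50),
])
print(f"{pool.capacity_tb}TB pool, {pool.utilization:.0%} utilized")  # 300TB pool, 56% utilized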

7 How does Virtualization work?
The second scenario focuses on basic security virtualization. A resource, like a server, is abstracted into its virtual counterparts, and all virtual resources can be accessed equally from the network. Suppose a firewall or interface identifies some form of unauthorized access or malicious attack which cannot be entirely prevented. From a security perspective, stopping the attack may eliminate the flow of information needed to investigate and prosecute, so the desired countermeasure is to isolate its effects. With virtualization, this can be done easily by directing the attack to specific virtual resources that are isolated from the user population, while other virtual resources continue to handle the normal operations of the server (a toy sketch follows these notes). These reactions are often performed automatically in a matter of milliseconds, based on the rules and configurations of the solution.
The third scenario focuses on reducing network delay. Imagine you are working with data from an application on your desktop computer, where the data itself is stored on a server. The distance between your computer and the server creates some delay in processing, and each node on the network, such as a router, bridge, or gateway, delays processing further as it performs its own operations on each data packet. Without virtualization, you would have to work directly on the server to reduce this delay. With virtualization, a copy of the data you are working with can be temporarily placed on your desktop computer. Data transmissions may still occur between the desktop and the server, but a resident copy of most of the frequently used data improves processing speed and eliminates most of the communication on the network. Another potential problem in this scenario is a loss of connection between your computer and the stored data. A preventative step is to replicate the data across several servers, so that if one server fails, another takes over as if nothing happened.
[Slide figure: a user's flow of information passing through a virtual resource interface, with an attack redirected to an isolated virtual resource.]
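Here is a toy Python sketch of the isolation idea; the rule, IP addresses, and resource names are invented for illustration. Requests flagged as suspicious are routed to a quarantined virtual resource, so normal traffic is untouched and the attack can still be observed.

from itertools import cycle

PRODUCTION_POOL = cycle(["vm-app-1", "vm-app-2"])  # serve normal users round-robin
QUARANTINE_VM = "vm-decoy-1"                       # isolated; instrumented for forensics

def is_suspicious(request):
    # Placeholder rule; a real deployment would use firewall/IDS signals.
    return request.get("source_ip") == "203.0.113.7"

def route(request):
    """Suspicious traffic is isolated rather than blocked, so it can be studied."""
    if is_suspicious(request):
        return QUARANTINE_VM
    return next(PRODUCTION_POOL)

print(route({"source_ip": "198.51.100.20"}))  # vm-app-1
print(route({"source_ip": "203.0.113.7"}))    # vm-decoy-1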

8 How does Virtualization work?
The last scenario focuses on basic processing virtualization in the form of parallel processing. Imagine an activity where you need to process data in some form. Like most tasks, the time required to complete it depends on the number of resources available: with four processors dedicated to the task, it can complete roughly four times faster than with one, assuming the work divides evenly. For most tasks, the four processors do not need to process the same data; each can focus on its assigned allotment. Data allotments are managed through a central managing agent, which is responsible for assigning data, tracking the processing to completion, and combining the results from each of the processors into an aggregated result for the user, as sketched below. The processing manager 'virtualizes' the data being sent to each of the processors. While parallel processing can be performed on a single computer, technologies like Big Data perform these types of activities using thousands of processors, both physical and virtual.
The four scenarios provided do not describe every method where virtualization is used in IT technologies, but they cover the most prevalent ones.
[Slide figure: a processing manager distributing data across processors A through D.]
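A minimal Python sketch of the manager/worker pattern described above, using the standard multiprocessing module; the workload (summing squares) is just a stand-in for real processing.

from multiprocessing import Pool

def process_chunk(chunk):
    # Each worker handles only its own allotment of the data.
    return sum(x * x for x in chunk)

if __name__ == "__main__":
    data = list(range(1_000_000))
    # The manager splits the data, assigns it, and collects the results.
    chunks = [data[i::4] for i in range(4)]   # four allotments
    with Pool(processes=4) as pool:
        partials = pool.map(process_chunk, chunks)
    print(sum(partials))                      # aggregated result for the user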

9 Virtualization – Goals & Objectives
There are four main objectives to virtualization:
Increase usage of hardware resources
Reduce the cost of managing resources
Improve business flexibility
Improve security and reduce downtime

10 Virtualization – Some Benefits
Virtualization offers many benefits for businesses, among them:
Easier manageability – Using virtualization, administrators can monitor and manage entire groups of servers/workstations from a single physical machine.
Elimination of compatibility issues – In the past, running Mac OS X, Linux, or Windows on the same machine created many compatibility issues. Today, using virtual machines, many different operating systems and applications can run on a single physical machine without affecting one another.
Fault isolation – Any kind of error within a virtual machine will not affect the other virtual machines. Problems are automatically isolated, to be fixed or investigated by an administrator while all other systems and services continue operating.
Increased security – Administrators can separate information and applications on a single physical machine into different virtual machines, preventing users from accessing or viewing what they should not.

11 Virtualization – Even More Benefits
More efficient use of resources – Many virtual machines can run on a single physical machine, utilizing that machine's resources far more efficiently than if it were running a single service or application.
Portability – Virtual machine data is stored in files on a physical machine. This means virtual machines can be transferred effortlessly from one physical machine to another, without any change in functionality.
Problem-free testing – One or more virtual machines can be set up as test machines, then used to test the stability of applications or programs without affecting day-to-day business operations.
Rapid deployment – A virtual machine's hard drive is often represented as a single file on a physical machine, so the drive can easily be duplicated or transferred to other physical machines. By using one virtual machine as a 'template', its virtual hard drive file can be used to rapidly create new virtual machine clones. The advantage is obvious: an administrator only has to carry out an OS installation once, as sketched below.
Reduced costs – Costs are reduced in the form of less physical hardware, lower power and cooling requirements, less physical space, and lower staffing requirements.
Ability to separate applications – Services and applications that may conflict with one another can be installed and run on separate virtual machines. Because they still run on the same physical machine, resources and processing power are not wasted.
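Here is a minimal sketch of the template idea in Python, using only the standard library; the file names are hypothetical, and real hypervisor tooling adds steps (VM definitions, unique identifiers) that this sketch omits.

import shutil
from pathlib import Path

TEMPLATE = Path("template-vm.qcow2")   # hypothetical disk image with the OS pre-installed

def clone_vm(name: str) -> Path:
    """Create a new VM disk by copying the template's single disk file."""
    disk = Path(f"{name}.qcow2")
    shutil.copyfile(TEMPLATE, disk)    # duplication is just a file copy
    return disk

# One OS installation (the template) yields many machines.
for i in range(3):
    print("created", clone_vm(f"web-{i}"))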

12 Considerations
Virtualization is usually a technique applied within larger solutions, such as:
Storage
Parallel Processing
Cloud Computing
Big Data
Mobile Computing
How virtualization is applied determines the benefits reaped. The value of virtualization has grown so much over the years that many existing and emerging technologies have adopted its concepts. Many solutions, such as dual- or quad-CPU machines, make the technique transparent to the user but require the technician to understand its specifications. As demonstrated in the previous scenarios, different approaches to virtualization provide different benefits, and these approaches are often combined to deliver a more comprehensive solution with more benefits. When considering a virtualization solution, make sure you understand what will be virtualized, for what purposes, and what the expected benefits are. Many solutions come with suggested configurations, and interfaces with other technologies should also be considered; it is important to understand these nuances when researching a specific solution. Most virtualization activities are performed in a split second based on established rules and scripts. These rules and scripts are a product of well-defined processes, objectives, and measured constraints, all of which require a respectable level of maturity in IT operations.

13 Standards and Guidelines
Several standards of interest in virtualization are:
Open Virtualization Format (OVF)
PCI DSS Virtualization Guidelines
NIST SP 800-125, Full Virtualization Security Guidelines
SNIA Standards and Education
Open Virtualization Format (OVF) is a product of the Distributed Management Task Force (DMTF) Virtualization Management (VMAN) initiative, designed to provide a standard format for describing and packaging virtual appliances (machines and applications) across different platforms. A consistent format ensures greater interoperability and portability of applications across operating systems and hardware platforms. The most recent version of this standard was published in December 2012.
The Virtualization Guidelines to the PCI (Payment Card Industry) Data Security Standard define the risks and recommendations for using virtualized technologies in cardholder data environments (CDE). These guidelines are therefore advantageous for any industry or business that plans to distribute, manage, or accept payment cards, such as credit cards, debit cards, and gift cards. Version 1.0 of these guidelines was published in June 2011.
NIST (National Institute of Standards and Technology) SP (Special Publication) 800-125 was published in February 2011 to provide consistent security guidelines for full virtualization technologies. Full virtualization refers to the complete emulation of a computer's underlying hardware, which allows a software package to run without modification. The intended audience includes system administrators and security engineers/program managers.
The Storage Networking Industry Association (SNIA) does not have an explicit standard for virtualization but provides guidance for using virtualization within larger areas of interest, such as cloud storage, analytics and big data, network storage, solid state storage, storage security, and storage management. Their website also provides education materials to support the architect and/or technician with storage-related virtualization questions.

14 Hypervisor
The hypervisor is the workhorse of full virtualization. It performs five key activities:
Partitioning
Isolation
Encapsulation
Load Balancing
Fault Tolerance
Partitioning is the act of separating the available resources. The hypervisor enables resources to be assigned and reassigned dynamically, expanding and contracting the capabilities of a virtual machine or an application running on it. The resources partitioned can be physical or virtualized. (A brief code sketch of inspecting these partitions follows these notes.)
Isolation: Each virtual machine is isolated from its host physical system and from other virtual machines. In a traditional application hosting environment, each application is hosted on a dedicated server; in a virtualized environment, each application is hosted on a dedicated virtual machine.
Encapsulation: A virtual machine can be represented as a single file, which in turn represents the services that virtual machine provides. This is very important for processes designed using SOA, because each virtual machine can be dedicated to a specific component (or application) of the process, and isolation protects each component from failures in the others.
Load Balancing: In a traditional server environment, when demand on a server reached a critical state, additional server resources were brought in to handle the excess. Virtualization technology is much more proactive: service requests for resources are analyzed using a scheduling algorithm and distributed across several resources. Within a virtual infrastructure, such as a cloud, load balancing enables redundancy and increases reliability.
Fault Tolerance: A key benefit of the first four activities is the ability to overcome failures in any part of the system. For fault tolerance, the hypervisor relies on all of its capabilities to ensure that the impact of any service disruption is minimized. Success in this area is the reason virtualization is used heavily in disaster recovery solutions. An (oversimplified) example: each virtual machine resides in a virtualization layer created by the hypervisor installed on every hardware device. When one of those hardware devices fails, the resources it provided are no longer available. Isolation ensures that what happens to the hardware stays restricted to the hardware and does not impact the operation of the virtual machines. Encapsulation ensures that the services required by the virtual machines remain available, and load balancing redirects requests for the lost resources to the rest of the hardware infrastructure. The result is that the hardware device drops out of the infrastructure with little impact on the operation of the virtualized systems.
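For a concrete, if minimal, view of a hypervisor's partitioning, here is a sketch using the libvirt Python bindings against a local QEMU/KVM hypervisor. This assumes the libvirt-python package is installed and a qemu:///system daemon is running; it only reads state, it changes nothing.

import libvirt

# Connect read-only to the local hypervisor.
conn = libvirt.openReadOnly("qemu:///system")

# Each domain (virtual machine) holds a partition of the host's resources.
for dom in conn.listAllDomains():
    state, max_mem_kib, mem_kib, vcpus, cpu_time_ns = dom.info()
    print(f"{dom.name()}: {vcpus} vCPUs, "
          f"{mem_kib // 1024} MiB of {max_mem_kib // 1024} MiB assigned")

conn.close()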

15 Moving to Virtualization
Define which virtualization technologies will be used.
Evaluate the risks associated with the desired virtualization technologies.
Understand the impact on the computing environment.
Secure physical assets and the hypervisor.
Isolate/restrict administrative access and functions.
Each virtualization technology requires different care in implementation and maintenance. However, the five steps presented here provide a good general sense of what should be done when adopting or expanding your virtualization capabilities. An important aspect of virtualization, both in preparing for a technology and in leveraging it, is the ability to secure the different components, assets, and functions. Understanding the different layers and how they work together provides the best foundation for planning the use of virtual systems.

16 Managing Processes
The following service management processes would most likely be impacted by the use of virtualization:
Capacity Management
Availability Management
IT Continuity Management
Configuration Management

17 Understanding Cloud Computing
Introduction

18 Introduction
Cloud computing is a technology made possible by advances in other technologies, such as Web 2.0, virtualization, and service-oriented architectures. Cloud computing encompasses any web-based, on-demand utility service. The scope and opportunity of cloud computing are still being defined, though some recognized characteristics have become prevalent.

19 What is Cloud Computing?
Cloud computing sounds simple enough: you open your web browser and everything you need for home, work, and fun is right at your fingertips. The problem is, like spending the day gazing at clouds in the sky with your friends, each cloud seems to take on a different shape and is always changing. The other problem is that nearly everything used to describe cloud computing is a technology or framework that has existed for several years, so it is hard to say that cloud computing is really a new approach to providing IT services; its underlying principles can be traced back to the 1960s. As a concept, cloud computing represents a paradigm shift in how systems are deployed. As a technology, cloud computing refers to the applications and services running on a distributed network, using virtualized resources and accessed via Internet protocols and networking standards. The key defining feature of the cloud is that the infrastructure is entirely virtual, invisible to the user, potentially located anywhere in the world, and requires no client installations or special hardware. In the cloud, you have access to the necessary infrastructure, but the usual burdens of ownership, administration, maintenance, and operation of hardware and software fall to the cloud provider, not the end user.

20 NIST Definition of Cloud Computing
Cloud computing encompasses:
Four Deployment Models
Three Service Models
Five Essential Characteristics
In September 2011, the National Institute of Standards and Technology (NIST) published The NIST Definition of Cloud Computing (NIST Special Publication 800-145) after several years of discussion. The definition provides a cloud model consisting of:
4 Deployment Models: Private, Community, Public, and Hybrid clouds
3 Service Models: IaaS, PaaS, and SaaS
5 Characteristics: on-demand self-service, broad network access, resource pooling, rapid elasticity, and measured service
The NIST definition has become the single point of reference in cloud computing for what is and is not acceptable as a cloud service.

21 Deployment Models
Public Clouds
Community Clouds
Private Clouds
Hybrid Clouds
The NIST definition identifies four deployment models: public, community, private, and hybrid clouds.
Public clouds are designed for open use by the general public. A public cloud exists on the premises of the business, academic, or government organization that owns, manages, and operates the offering.
Community clouds are designed for exclusive use by a specific community of users from organizations with shared missions or concerns. One or more of the organizations may own, manage, and operate the offering, or it may be provided by a third-party provider or a combination of the two. The solution may exist on or off premises.
Private clouds are designed for exclusive use by a single organization comprising multiple consumers. The offering may be owned, managed, and operated by the organization, a third party, or a combination of the two, and may be found on or off premises.
Hybrid clouds are a combination of two or more public, community, or private clouds, bound together by standardized or proprietary technology for data and application portability; each component cloud remains a unique entity.
The Jericho Forum, a subset of The Open Group, has an expanded model of cloud deployments as part of their current focus, 'Securely Collaborating in Clouds'. Their Cloud Cube Model provides four dimensions for distinguishing how clouds are formed and how they provision services:
Internal/External: defines the physical location of the data, either within or outside the organization's boundaries (on or off premises).
Proprietary/Open: defines the ownership of the technology used, that is, whether the means for provisioning cloud services is owned and controlled by the service provider. This dimension affects the portability of the solution: the more proprietary the provision, the harder it is to move from one cloud solution to another.
Perimeterized/De-perimeterized architectures: defines whether the cloud operates within the traditional IT security perimeter or outside it.
Insourced/Outsourced: whether the cloud service is provided exclusively by the organization's own IT department or by a third party. An insourced cloud is typically an internal, proprietary, perimeterized solution and represents the majority of private clouds. An outsourced cloud is typically an external, open, de-perimeterized solution and clearly corresponds to a NIST-defined public cloud. (Cloud Cube Model ver 1.0, Jericho Forum: April 2009)
Both the NIST deployment models and the Jericho dimensions demonstrate the flexibility and diversity in how clouds are provided, especially in relation to a business's current IT infrastructure.

22 Service Models
Infrastructure as a Service
Platform as a Service
Software as a Service
There are three service models defined by NIST:
Infrastructure as a Service – The customer is provided the processing, storage, network, and other fundamental computing resources needed to deploy and run arbitrary software, including operating systems and applications. The underlying infrastructure is managed and controlled by the service provider, while the customer has control of everything running on top of it (a brief sketch follows these notes).
Platform as a Service – The customer is provided the capability to deploy customer-created or acquired applications onto the cloud infrastructure using provider-supported programming languages, libraries, services, and tools. The underlying infrastructure is managed and controlled by the service provider, but the customer has control over the deployed applications and limited control over application-hosting configurations.
Software as a Service – The customer is provided the use of the provider's applications running on the cloud infrastructure. The applications are accessible through a thin client, such as a web browser, or a program interface. The underlying infrastructure and applications are managed and controlled by the provider; the customer may have limited control over user-specific application configuration settings.
(Diagram from Security Guidance for Critical Areas of Focus in Cloud Computing v2.1, Cloud Security Alliance: December 2009)
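To ground the IaaS model, here is a hedged sketch using boto3, the AWS SDK for Python: the customer requests raw compute from the provider's API and controls everything that runs on top of it. This assumes AWS credentials are already configured, and the AMI ID shown is a placeholder, not a real image.

import boto3

# IaaS in practice: ask the provider's API for a raw virtual machine.
ec2 = boto3.resource("ec2", region_name="us-east-1")

instances = ec2.create_instances(
    ImageId="ami-0123456789abcdef0",  # placeholder AMI; the OS is the customer's choice
    InstanceType="t3.micro",
    MinCount=1,
    MaxCount=1,
)
print("launched:", instances[0].id)
# Everything above the hypervisor (OS, middleware, application) is now ours to manage.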

23 Essential Characteristics
On-demand self-service
Broad network access
Resource pooling
Rapid elasticity
Measured service
The NIST definition establishes five essential characteristics for cloud computing. Any specific deployment model, service model, or solution may add further characteristics, but these five must always be present.
On-demand self-service: The customer can obtain and expand computing capabilities automatically, without any interaction with the service provider.
Broad network access: All service capabilities are available over the network and accessible through standard mechanisms using thin or thick client platforms, including workstations, laptops, tablets, and mobile phones.
Resource pooling: Physical and virtual resources are pooled to serve multiple consumers in a multi-tenant model and are dynamically assigned according to customer demand. The customer generally has no control over or knowledge of the exact location of the resources provided, though a location may be specified at a higher level of abstraction, such as country or state.
Rapid elasticity: Capabilities are provided and released elastically, sometimes automatically, to scale with demand. To the customer, the available capabilities often appear to be unlimited and able to be appropriated in any quantity at any time.
Measured service: Resource usage is automatically and transparently monitored, controlled, reported, and optimized through a metering capability at a level of abstraction appropriate to the service type (see the sketch after these notes).
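As a toy illustration of measured service, here is a small Python sketch, invented for this deck rather than taken from any provider's API, that meters per-tenant resource usage so it can be reported and billed:

from collections import defaultdict

class Meter:
    """Records resource consumption per tenant, per resource type."""
    def __init__(self):
        self.usage = defaultdict(float)

    def record(self, tenant, resource, amount):
        self.usage[(tenant, resource)] += amount

    def report(self, tenant):
        return {r: amt for (t, r), amt in self.usage.items() if t == tenant}

meter = Meter()
meter.record("acme", "vcpu_hours", 12.5)
meter.record("acme", "storage_gb_hours", 240.0)
meter.record("globex", "vcpu_hours", 3.0)
print(meter.report("acme"))   # {'vcpu_hours': 12.5, 'storage_gb_hours': 240.0}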

24 Cloud Services
SaaS – Software as a Service
BaaS – Business as a Service
OaaS – Organization as a Service
DaaS – Data as a Service
SaaS – Storage as a Service
PaaS – Platform as a Service
FaaS – Framework as a Service
IDaaS – Identity as a Service
IaaS – Infrastructure as a Service
While the NIST definition provides only three service models, cloud computing is incredibly broad, comprising software, service capacity, storage, middleware, virtualization, and security and management tools, all delivered as services. Some of the common terms in cloud computing marketing, beyond the standard acronyms, are:
Software as a Service – Software is made available to customers on a pay-as-you-go, subscription, or free basis. Web-based applications are the most prevalent type of SaaS, and Google Apps is the best-known suite; applications like Salesforce and NetSuite are more advanced forms of SaaS.
Business as a Service – Takes the SaaS model one step further by having the service provider manage the business processes behind the software. For instance, an application like Salesforce provides software for customer relationship management but does not run the business office; with BaaS, the business office itself is outsourced.
Organization as a Service – An expanded form of BaaS for shared services on the web, ideal for organizations that utilize IT but do not wish to create an internal IT department.
Data as a Service – Provides searching, extraction, normalization, and sometimes analysis services related to data. Services are priced by volume and/or data type. Microsoft Azure and MapQuest are prevalent DaaS providers.
Storage as a Service – Provides the customer cloud storage space to save data. Providers like Amazon Web Services, Dropbox, and iCloud offer popular versions of this cloud service (see the sketch after this slide's notes).
Platform as a Service – Gives customers, namely independent software vendors, the development platform and resources to design, build, and test applications, as well as host the applications on a web server. Salesforce's force.com is a leader in this cloud service.
Framework as a Service – Takes PaaS to the next level by providing additional resources for connectivity and interoperability. Frameworks are already present in many programming languages, but companies like Microsoft allow for cross-platform support to create greater expansion between multiple frameworks.
Identity as a Service – Provides the authentication services associated with application login as a single contained service. Today's applications allow a person to log in to many applications using already-established usernames and passwords from social networking sites like Facebook and Twitter, or well-established web tools such as Google or Yahoo.
Infrastructure as a Service – Sometimes indistinguishable from other cloud services, IaaS provides an abstracted hardware environment for business operations. Amazon Web Services is a leading provider in this area, with the ability to provide utility-based computing, storage, and networking services to individual or corporate customers.
Some of these are somewhat 'tongue in cheek', but the important thing to remember is that the main focus is on SERVICE delivery. All the components are seen as a service or service component. This is a change from how internal IT organizations have typically viewed these components.
Very often, the focus is on the product, the tool, the box, or the piece of software, not on the fact that you use these components to deliver a service to your customers and clients.
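As a hedged example of Storage as a Service, the following Python sketch uses boto3 to put an object into Amazon S3; the bucket name and object key are placeholders, and AWS credentials are assumed to be configured.

import boto3

s3 = boto3.client("s3")

# Storage as a Service: capacity is consumed through an API call,
# with no knowledge of the disks or data center behind it.
s3.put_object(
    Bucket="example-bucket-name",   # placeholder bucket
    Key="reports/q1.csv",
    Body=b"region,revenue\nus,100\n",
)
head = s3.head_object(Bucket="example-bucket-name", Key="reports/q1.csv")
print(head["ContentLength"], "bytes stored")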

25 SaaS = Software-as-a-Service
What is SaaS? SaaS changes the way software applications are stored and accessed: through SaaS, software applications and services are accessed by remote users via the Internet.
Traditionally, applications are installed on and accessed from a host computer. When running the application, the host computer provides the required processing and memory resources. The customer owns and controls the application, typically requiring a license for use, and has full control over all application configurations, including features.
With SaaS, the application is installed and accessed on the 'Internet', not the host computer. The application and underlying infrastructure are owned and controlled by the service provider and accessed by the customer using a web browser or program interface. The customer may have limited control over user-specific configurations. In a traditional configuration, the customer must buy the entire application; in a SaaS configuration, the customer pays for use of the application, whether that is a single use or every day. From the customer's perspective, they simply open their browser and access the application, as sketched below.
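A minimal sketch of accessing a SaaS application through its program interface, using Python's requests library; the URL, endpoint, and token are entirely hypothetical.

import requests

# The "installation" is just knowing the provider's endpoint and holding a credential.
BASE_URL = "https://api.example-saas.com/v1"   # hypothetical SaaS endpoint
TOKEN = "replace-with-your-api-token"

resp = requests.get(
    f"{BASE_URL}/invoices",
    headers={"Authorization": f"Bearer {TOKEN}"},
    params={"status": "open"},
    timeout=10,
)
resp.raise_for_status()
for invoice in resp.json():
    print(invoice)   # the processing and storage happened on the provider's side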

26 SaaS Checklist
The NIST definition provides a simple checklist for testing a SaaS implementation. There are four basic points common to every SaaS implementation:
Software applications or services are owned, delivered, and managed by a service provider.
The location of the application and required resources is transparent to the user.
Users can access these services or software applications using a web browser or program interface.
The user only pays for the resources they use when accessing the application.

27 Categories of SaaS
Customer-Oriented Services
Software solutions for the individual, generally the public
Software is offered on a subscription basis or offered free
Example: web-based services
Business-Oriented Services
Software solutions for companies or enterprises
Software is offered on a subscription basis and costs are attributed to actual usage
Example: product management services or customer relations applications
SaaS applications can be found on any cloud deployment, whether public, private, community, or hybrid. In their construction, SaaS applications are all alike, because they must embrace the five essential characteristics of a cloud solution. The distinct differences appear in the marketing and billing of different SaaS applications. As a result, it is easy to see two categories of SaaS applications: customer-oriented services and business-oriented services.
Customer-oriented SaaS is marketed to individuals, usually the general public, and is found on public or open cloud deployments. It is typically offered on a subscription basis or free. Free software will generally include advertising within the application. Some free SaaS applications offer a premium subscription for a monthly fee, or an in-product store to purchase additional features; a premium subscription allows more features or eliminates advertising within the application. Many SaaS games offer the in-product store.
Business-oriented SaaS is marketed to groups of people. Gmail is a customer-oriented service, whereas corporate Gmail is a business-oriented service; both have the same features, though corporate Gmail adds features to support collaboration. With a business-oriented SaaS solution, the software is offered under three cost models: a flat fee for a specific number of users, a charge based on actual usage by all associated employees, or a combination of the two, as sketched below. The initial setup cost for SaaS is usually lower than purchasing traditional software, as businesses do not have to plan, provision, develop, or deploy the hardware or software supporting the application; they simply pay, on a subscription basis, for only the features or the level of usage they require.
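A small Python sketch comparing the three business-oriented cost models named above; the prices and usage figures are made up purely for illustration.

def flat_fee(users, fee_per_user=15.0):
    # Flat fee for a specific number of licensed users.
    return users * fee_per_user

def usage_based(usage_hours, rate_per_hour=0.25):
    # Charge attributed to actual usage across all employees.
    return usage_hours * rate_per_hour

def combined(users, usage_hours, base_per_user=5.0, rate_per_hour=0.10):
    # A combination: a smaller per-user base plus metered usage.
    return users * base_per_user + usage_hours * rate_per_hour

# 40 employees, 2,000 hours of combined use in a month:
print(flat_fee(40))          # 600.0
print(usage_based(2000))     # 500.0
print(combined(40, 2000))    # 400.0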

28 What is a Platform?
A platform is a system that can be reprogrammed and customized by outside developers. An application is a system that cannot be reprogrammed by outside developers.
Most people and companies tend to confuse the terms application and platform, and the software-as-a-service (SaaS) offering hasn't helped in this matter. Marc Andreessen, the co-author of Mosaic and founder of Netscape Communications Corporation, attempted a distinction between the two. He suggested that "a platform is a system that can be reprogrammed and therefore customized by outside developers...and in that way, adapted to countless needs and niches that the platform's original developers could not have possibly contemplated." He defines an application as "a system that cannot be reprogrammed by outside developers. It is a closed environment that does whatever its original developers intended it to do and nothing more." For the most part, software developers plan, design, create, test, and deploy their software to users. These activities are crucial to the success of the application, but even more important is the platform where they take place.

29 Platform Layers
Do It Yourself
Managed Hosting
Cloud Hosting
Cloud IDEs
Cloud Application Builders
When looking to create a platform for a software environment, there are many options to choose from. Without a clear industry definition of 'platform', offerings vary greatly in what they provide, and an enterprise looking to create a platform for its software development needs to choose the right direction for itself. Below are the options, as suggested by Phil Wainewright, an analyst on software industry trends.
Do It Yourself – This has always been an option for most enterprises. They own the servers, the network, and the software, and take on the responsibility for designing, developing, deploying, and managing the application as well as the infrastructure it sits on.
Managed Hosting – Similar to Do It Yourself, but responsibility for the infrastructure is shared with another party. The extent of that sharing varies with the relationship and the contract. Typically, the enterprise can choose the components of the infrastructure, some rules governing its setup, and of course the cost of operating it. The second party is responsible for designing, implementing, managing, and improving the infrastructure, usually based on a set of Service Level Agreements (SLAs) and policies provided by the customer on how those SLAs should be met.
Cloud Hosting – The first layer of platform which is utility-based: the enterprise pays for the resources it uses. Service Level Agreements may be in place, but the customer has little to no say in how those SLAs are met, and relinquishes choices in infrastructure and infrastructure design to the provider of the platform. The purpose of this service is to provide a platform for installing and running the enterprise's application; while the provider ensures the infrastructure works properly, it is the enterprise's responsibility to ensure the application does.
Cloud IDEs – To offer more than a simple hosting platform, cloud-based Integrated Development Environments (IDEs) give the enterprise customer a platform for developing and deploying their applications. Control of the infrastructure is, of course, the provider's responsibility, and the provider also supplies the tools used for development and collaboration; all the enterprise has to provide is the manpower. The downside is that development is tied to the provider's infrastructure, and if it doesn't work out, moving the application to a different platform is difficult.
Cloud Application Builders – The final layer of platforms is geared toward power users and designers. The application infrastructure is in place, but customers bring their own tools and developers to the platform. The only real constraint is that the type of application that can be developed depends on the infrastructure provided. This service falls in line with the IaaS service model, though management of the platform running on top of the infrastructure may borrow concepts from PaaS offerings.
Each of these layers has its benefits and failings. Moving through the layers, the customer loses more and more control over the infrastructure, but also sheds the financial burden of implementing and maintaining it, and gains the ability to rapidly develop and deploy applications.

30 Cloud Development Stack
[Slide figure: Saugatuck Technology's Cloud Development and Deployment Framework, layering Infrastructure Services (virtualization, security, DBMS access, storage, multi-tenancy, other services), Middleware Services, Developer Services (methodology tools, analysis and design tools, build tools and SDKs, workflow and integration tools, testing tools, deployment tools), and Cloud Applications (native and ported), with Metering and Analytics and Administration alongside.]
Software development in the cloud is still a new frontier. Many PaaS service providers and software developers (as customers) struggle to find the right balance between what is needed and what is provided, which results in some confusion and diverse approaches to developing cloud-based applications and providing platform-based services. The most comprehensive picture of a software development platform in the cloud comes from Saugatuck Technology, Inc., a research and management consulting company. Their Cloud Development and Deployment Framework, pictured above, provides a model for independent software vendors (ISVs) to visualize and understand the scope of requirements for typical development and deployment of a cloud, and is intended to aid ISVs in qualifying the IaaS and PaaS offerings of potential cloud providers.
The basic components of the framework are:
Infrastructure Services – Provides the capabilities related to the hardware, software, and associated infrastructure. In cloud terms, this is the bottom half of IaaS offerings and represents the physical aspects of the infrastructure.
Middleware Services – Enables the communication and management of data between applications and the infrastructure, usually through the deployment of APIs. This is the top half of an IaaS offering and serves, along with infrastructure services, as the foundation for all other offerings within the PaaS and SaaS service models.
Developer Services – Provides the primary services needed to develop software in the cloud, including the components, tools, languages, and libraries required in software development. This layer is the primary distinguisher between PaaS service providers.
Cloud Applications – Describes the applications developed and deployed in the cloud, including native cloud applications and ported cloud applications.
Metering and Analytics – Provides the tools and resources used to measure and analyze activity within each layer of the framework.
Administration – Provides the management capabilities required to successfully support the customer utilizing the framework.

31 Why use Cloud Computing?
Financial
Capital vs. operational expenditures
Pay-as-you-go features
Reduced IT management costs
Technological
Adoption of emerging technologies
Rapid scalability
Access anywhere
Insurance against the future
Competitive advantage
Internal
Transfer of risk
Business without walls
Better security
Innovation
Environmental
Sharing resources
Green IT
The benefits of the cloud influence several areas of concern for the customer. Capital expenditures represent expanding the IT infrastructure: more servers, more network components, more applications, more licenses, more investments, and more tax burdens, with more money spent planning, purchasing, developing, testing, deploying, and managing the expanding infrastructure. With a cloud solution, the business has no capital expenditure or up-front investment; all expenditures are operational and can increase or decrease with business trends.
Supporting a purely operational business requires a similar model from IT. The elasticity inherent in cloud services makes more resources available when required and fewer when they are not, with charges and billing based on consumption during a given period. This is especially important for small and medium-sized businesses that lack their own economies of scale to reduce IT costs.
Cloud providers are actively adopting, and even leading the development of, emerging technologies and standards. Their large-scale operations allow them to take on and absorb risk without impacting the overall operation of the cloud. The result is a continual cycle of improvements that cannot be matched outside the same economy of scale. For a business, these improvements mean faster performance, greater reliability, and better support for their business processes than they could typically achieve on their own.
The on-demand self-service and rapid elasticity characteristics of a cloud let customers grow their business without worrying about having the IT infrastructure in place. This is especially important for businesses that not only want to use SaaS applications but develop them as well; a PaaS, for example, provides a combination of programming languages, collaboration tools, development processes, and monitoring and testing methods to the customer.
With the growth of mobile computing, portability has become a major concern for businesses, both in operating the business and in connecting with customers. Clouds can be accessed anywhere with a simple Internet connection. By default, all a customer needs is a web browser, though some companies offer interface applications for tablets and mobile phones, available free or for a small price in the appropriate app stores.
The economic and technological future of a business is unpredictable and uncertain. The cloud can provide a stable infrastructure that is available when needed, despite the industry and market pressures facing the customer. When adopting a cloud solution, the customer eliminates capital expenses and transfers ownership of the IT infrastructure to a service provider, but they also transfer all or some of the risk related to IT management. This can matter for businesses that are not in a position to address those risks properly, or are simply unaware of them because IT is not their business.
Without the need for capital expenditures in IT, a business can also reduce its expenditures for maintaining an IT environment, such as raised floors, cooling systems, and so on. Because cloud services can be accessed anywhere, it is possible to create a truly collaborative business without walls, where all employees work together while remote from one another. As the sixth law of cloudonomics indicates, cloud solutions will typically be more secure from attack than most businesses' own systems, simply because of the scale of their operations; along the same lines, cloud providers are also more likely to comply with government and industry standards and regulations. A business can therefore increase the success of its business processes by leveraging the resources of a cloud provider.
For businesses conscious of their impact beyond the company walls and customer base, clouds offer a great opportunity to reduce the company's carbon footprint. Pooling and sharing resources is a major characteristic of cloud computing: with solutions on a public cloud, a business shares resources with other customers, resulting in an infrastructure that uses less power and fewer natural resources, and leaves less of an imprint on the environment, than if each business supported its own infrastructure. Cloud providers can choose the locations of their data centers on greenfield sites and can adopt expensive green technologies. For the business, this requires researching the environmental consciousness of potential cloud providers, but with the right partner, a business can participate in addressing planetary concerns.

32 Considerations regarding Clouds
Using cloud computing depends on the organization itself and its relationship to computing:
Is the organization a user or a provider of IT services?
Does the organization have a steady demand on IT, or does demand swing seemingly uncontrollably over a business year?
Do the organization's business processes utilize sensitive or restricted data or information?
Not every cloud solution will be right for every organization, and choosing how cloud services are adopted is the first decision required. Many providers of cloud services were first users of the same services: Google and Amazon are great examples of companies that created a computing environment which could then be shared with customers as a service. Most organizations will not be in a position to provide cloud services in the same manner as these two companies, but they may have some technology (an application) with potential value to the marketplace. Other organizations are simply looking for an IT solution that will work effectively in their environment. Deciding how you will use cloud resources is the first step in determining your approach.
Many industries see rapid and dynamic changes in resource demands. Retailers often experience a surge during holiday seasons or promotional releases of popular products. Health and insurance providers may see demand rise in particular geographies because of traumatic events and natural disasters. Software developers need additional resources when testing new applications. Marketing firms and research companies need resources when analyzing data that are not needed when simply publishing reports. Cloud resources allow organizations to execute business processes without committing to capital expenses in IT, making these processes more efficient overall and able to respond rapidly to dynamic changes resulting from unexpected events. Organizations can create models which supplement existing IT resources with cloud resources to handle additional demand.
Some business processes may not be appropriately serviced by external providers, especially when they deal with sensitive or confidential data that is regulated. The creation of internal clouds is typically driven by this heightened need for security, privacy, or regulatory control. While the customer takes on the financial and business responsibility of maintaining the cloud, they still gain the strengths and efficiencies associated with cloud computing for their business operations.

33 Effectively using Cloud Services
Supporting a well-defined business function or process activity. Functions include email, calendars, customer relationship management, and enterprise resource management. Activities include data analysis, data storage, transaction management, and identity management.
Plan-Do-Check-Act
The best approach to adopting cloud services is to start small and use the PDCA (Plan-Do-Check-Act) cycle when adopting individual services. Focus on areas of the business which are not critical to the core business, where costs are rising, or where the demand on IT services is relatively unpredictable. Prioritize these areas and begin research to determine the options available to the organization. Don't restrict your research to one type of cloud service: an attractive SaaS solution may not meet all the functional requirements, while a PaaS solution may stray from the organization's original intent in adopting cloud services, or vice versa.
Encourage piloting to determine whether a chosen cloud solution is the right fit for the organization, not just in meeting the business requirements but in how easily the workforce adopts it. Cloud solutions which leverage basic principles of social networking will generally have better adoption rates than solutions which are more controlling and harder to work with. The need for rapid deployment and adoption of solutions is becoming critical in the business world, as technologies are advancing faster than they were even a decade ago. In many cases, the adoption of cloud services may be easier for 'backroom' functions, such as backups, data analysis, and software testing, than for customer-facing functions, which may need several changes before they are fully adopted.
Do not take any issue, risk, or opportunity lightly. Make sure processes are in place to investigate, report, and decide on any new information regarding cloud services. As an organization's cloud capabilities increase, collaboration and enterprise-level oversight become increasingly important to maintain.

34 Standards for Cloud Computing
The aforementioned NIST definition is the only internationally recognized 'standard' for cloud computing, though several groups, such as the DMTF, OMG, SNIA, and ETSI, are working to create standards. IEEE has announced two working groups to create standards related to:
P2301 – Cloud Interoperability and Portability Profiles
P2302 – Intercloud Interoperability and Federation
A main driver of standardization in cloud computing is the idea of Intercloud computing, or federated clouds: the ability of diverse cloud services to work together to meet a customer's needs. If successful, this capability will allow a customer to create an entire computing environment using multiple cloud services from different providers. Currently, this type of model is only available to organizations that invest in creating proprietary hubs to communicate and interact with diverse services using a wide range of protocols and methodologies.

35 Moving Forward
Use the document Developing Virtualization and Cloud Computing Capabilities to determine and act on your organization's requirements regarding these two technologies, and to make effective use of the aids and templates provided in the toolkit.

