AWS Big Data High Velocity Data Analysis QMUL Tom Woodyer


1 AWS Big Data High Velocity Data Analysis QMUL Tom Woodyer

2 Overview: Big Data What is big data?
The collection and analysis of large amounts of data to answer questions and create competitive advantages. As society becomes more digital, the amount of data being created and collected is growing and accelerating significantly. Analyzing this ever-growing data becomes a challenge with traditional analytical tools, so innovation is required to bridge the gap between the data being generated and the data that can be analyzed effectively. The concept of "big data" is not just about collecting and analyzing data: the real value for an organization comes when its data can answer questions and create competitive advantages.

3 When does data become “big data”?
Overview: Big Data. When does data become "big data"? When data sets become so large that you have issues collecting, storing, organizing, analyzing, moving, and sharing them: the velocity, volume, and variety of the data outgrow your ability to process it. As data is collected, its sheer volume can cause issues, especially in on-premises environments. As more and more data accumulates, it becomes increasingly difficult to move the data to the applications that need to process it; in these circumstances, it is much simpler to move the applications to the data instead. This is very difficult to do in on-premises environments, and the rise of cloud providers has made moving applications to the data far easier.
Multiple sources of data result in large volume, velocity, and variety:
Computer-generated data: application server logs (websites, games), sensor data (weather, water, smart grids), images and videos (traffic, security cameras)
Human-generated data: blogs, reviews, emails, pictures; social media/graph analysis; brand perception
Computer-generated data can vary from semi-structured logs to unstructured binaries. This data source can produce pattern matching or correlations that generate recommendations (for social networking and online gaming in particular). You can also use computer-generated data to track application or service behavior. Human-generated data feeds searches (legal fact discovery), natural language processing, sentiment analysis on products or companies, and product recommendations. Social graph analysis can produce product recommendations based on your circle of friends, jobs you may find interesting, or even reminders based on your circle of friends (birthdays, anniversaries, and so on).

4 Big data use cases by industry
The ability to effectively analyze big data from multiple sources adds value across sectors. Big data can provide insight into nearly any industry, and you can use it to create competitive advantages in a wide variety of them:
Advertising: processing and analyzing clickstream and impression logs, targeted advertising
Streaming media: recommendations, pattern matching, media encoding, file processing
Oil and gas: gas meters, pipeline sensors
Retail: product recommendations, fraud detection, sentiment analysis, transaction analysis
Consumer health: medical records analysis, clinical analytics, bio-sensors
Security: threat analysis and detection, security analytics, anti-virus, image recognition
Social media: user demographics, brand perception
Gaming: in-game analytics, usage analysis, in-game metrics
Web and mobile apps: identifying user trends, web indexing, log processing and analytics
Travel: travel recommendations, dynamic pricing
Airlines: customer data mining, dynamic pricing
Telecom: call record analysis, operational metrics
Government: analyzing public data sets, driving research
Business intelligence: data warehousing, data visualization

5 The Big Data "Pipeline"
Stages: Collect, Store, Process & Analyze, Visualize. The pipeline turns data into insight, and the time-to-answer (latency) is a balance of throughput and cost. The general flow of a big data pipeline starts with data and ends with insight. How you get from start to finish depends on a host of factors, but most big data workflows follow this pattern: data is ingested (collected) by an appropriate tool and then stored in a persistent way. The data then needs to be processed and/or analyzed: the processing/analysis solution takes the data from storage, performs operations on it, and stores the result again. That result can then be used by other processing/analysis tools, or by the same tool again, to get further answers from the data. To make answers useful to business users, they are typically visualized using a business intelligence (BI) tool. Once the appropriate answers have been presented, users gain insight into the data that they can use to make further business decisions. Note that not all big data solutions end in visualization: many solutions, such as machine learning and other predictive analytics, feed answers programmatically into other software or applications, which extract the insight on their own and respond as designed. The tools you choose to deploy in your pipeline determine your "time to answer", that is, the latency between when your data is created and when you are able to get insight from it. The best way to architect the solution is to decide how you will balance throughput with cost, since generally speaking a higher throughput (and the resulting lower latency) means a higher cost.
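To make the collect stage concrete, here is a minimal ingestion sketch in Python using boto3. The stream name, region, and event fields are illustrative assumptions, not part of the course material; the sketch simply shows one common way to push records into the front of a pipeline (here, an Amazon Kinesis stream) so a downstream tool can store and process them.

```python
# Minimal "collect" sketch: push application events into an Amazon Kinesis stream.
# Assumptions: a stream named "example-events" already exists in us-east-1 and the
# caller has credentials with kinesis:PutRecord permission.
import json
import boto3

kinesis = boto3.client("kinesis", region_name="us-east-1")

def collect_event(event: dict, partition_key: str) -> None:
    """Send one event to the stream; downstream consumers store and analyze it."""
    kinesis.put_record(
        StreamName="example-events",
        Data=json.dumps(event).encode("utf-8"),
        PartitionKey=partition_key,   # groups related records onto the same shard
    )

collect_event({"user": "u-123", "action": "page_view", "ts": 1462294800}, "u-123")
```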

6 Big Data on AWS
Immediate availability: deploy instantly, with no hardware to procure and no infrastructure to maintain and scale. Broad and deep capabilities: over 50 services and hundreds of features to support virtually any big data application and workload. Trusted and secure: services are designed to meet the strictest requirements and are continuously audited, including for certifications such as ISO 27001, FedRAMP, DoD CSM, and PCI DSS. Hundreds of partners and solutions: get help from a consulting partner or choose from hundreds of tools and applications across the entire data management stack. When working with big data, you will not be working with just one type of data or one type of tool. Every organization's needs are different, but you will generally be incorporating a variety of data and services into your big data ecosystem on AWS. For example, you may be using Amazon Elastic MapReduce (EMR) and Amazon Redshift to implement Hadoop for batch-processing workloads and a data warehouse for querying data from your BI and analytics tools. AWS provides services and infrastructure for all of your big data ecosystem needs, and we will cover those throughout this course.

7 Where do AWS solutions map to the pipeline?
Pipeline stages: Collect, Store, Process & Analyze, Visualize. AWS services map onto these stages as follows:
Real-time ingest: Amazon Kinesis Firehose
Data import: AWS Import/Export Snowball
Message queuing: Amazon SQS
Web/app servers: Amazon EC2
Object storage: Amazon S3, Amazon Glacier
Real-time streams: Amazon Kinesis Streams
RDBMS: Amazon RDS
NoSQL: Amazon DynamoDB
Search: Amazon CloudSearch
IoT: AWS IoT
Hadoop ecosystem: Amazon EMR
Real-time processing: AWS Lambda, Amazon Kinesis Analytics
Data warehousing: Amazon Redshift
Machine learning: Amazon Machine Learning
Elasticsearch analytics: Amazon Elasticsearch Service
Process and move data: AWS Data Pipeline
Business intelligence and data visualization: Amazon QuickSight
On AWS, collection and ingestion can be accomplished using Amazon EC2 instances running Kinesis-enabled or other applications designed for ingestion. AWS Data Pipeline can also be used to ingest data from on-premises sources into AWS. We have a wide variety of storage solutions; however, the most commonly used in big data environments running on AWS is Amazon S3. We'll go over these storage solutions later in the course. It's also important to note that due to the nature of big data, storage will be used both before and after the analysis and processing phases, for storing results for viewing and for further analysis or processing. The primary tool for big data processing in the AWS big data ecosystem is Amazon EMR, and we'll talk about that service in the next several modules. Amazon Redshift, our petabyte-scale data warehouse solution, will be detailed on day three of the course, and is the primary solution for finding more detailed answers from analyses of your data. We'll also go over Amazon Kinesis, our streaming data processing solution, and AWS Data Pipeline, our data transformation solution, in later modules. Some of these tools are often used in more than one of these categories.
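As one example of the process-and-analyze stage, the following sketch submits a Spark step to an existing Amazon EMR cluster with boto3. The cluster ID, script location, and bucket names are placeholders, not values from the course; it only illustrates how batch jobs are commonly handed to EMR.

```python
# Minimal "process & analyze" sketch: run a Spark job on an existing EMR cluster.
# Assumptions: cluster "j-EXAMPLE123" is already running and the S3 paths exist.
import boto3

emr = boto3.client("emr", region_name="us-east-1")

response = emr.add_job_flow_steps(
    JobFlowId="j-EXAMPLE123",
    Steps=[{
        "Name": "daily-clickstream-aggregation",
        "ActionOnFailure": "CONTINUE",
        "HadoopJarStep": {
            "Jar": "command-runner.jar",
            "Args": [
                "spark-submit",
                "s3://example-analytics/jobs/aggregate_clicks.py",  # assumed script
                "--input", "s3://example-analytics/raw/",
                "--output", "s3://example-analytics/aggregated/",
            ],
        },
    }],
)
print("Submitted step:", response["StepIds"][0])
```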

8 The Six Advantages and Benefits of Cloud Computing on AWS

9 The Six Advantages And Benefits of Cloud Computing on AWS: #1
Trade capital expense for flexible expense. Instead of having to invest heavily in data centers and servers before you know how you're going to use them, you pay only when you consume computing resources, and pay only for how much you consume.

10 The Six Advantages And Benefits of Cloud Computing on AWS: #2
Benefit from massive economies of scale. By using cloud computing, you can achieve a lower variable cost than you can get on your own. Because usage from hundreds of thousands of customers is aggregated in the cloud, providers such as Amazon Web Services can achieve higher economies of scale, which translates into lower pay-as-you-go prices.

11 The Six Advantages And Benefits of Cloud Computing on AWS: #3
Eliminate guessing on your infrastructure capacity needs. When you make a capacity decision prior to deploying an application, you often end up either sitting on expensive idle resources or dealing with limited capacity. With cloud computing, these problems go away: you can access as much or as little capacity as you need, and scale up and down as required with only a few minutes' notice.

12 The Six Advantages And Benefits of Cloud Computing on AWS: #4
Increase speed and agility. In a cloud computing environment, new IT resources are only ever a click away, which means you reduce the time it takes to make those resources available to your developers from weeks to just minutes. This results in a dramatic increase in agility for the organization, since the cost and time it takes to experiment and develop is significantly lower.

13 The Six Advantages And Benefits of Cloud Computing on AWS: #5
Stop spending money on running and maintaining data centers. Focus on projects that differentiate your business, not the infrastructure. Cloud computing lets you focus on your own customers, rather than on the heavy lifting of racking, stacking and powering servers.

14 The Six Advantages And Benefits of Cloud Computing on AWS: #6
Go global in minutes. Easily deploy your application in multiple regions around the world with just a few clicks. This means you can provide a lower latency and better experience for your customers simply and at minimal cost. For more on the innovations that make the AWS cloud unique, see:

15 AWS Global Infrastructure

16 AWS Data Centers
A single data center typically houses several thousand servers. All data centers are online and serving customers; no data center is "cold". AWS uses custom network equipment, sourced from multiple ODMs, running an Amazon custom network protocol stack. AWS data centers are built in clusters in various global regions, and very large data centers are deliberately avoided, since a failure in one large facility would affect more customers. In case of failure, automated processes move customer data traffic away from the affected area. Core applications are deployed in an N+1 configuration, so that in the event of a data center failure there is sufficient capacity to enable traffic to be load-balanced to the remaining sites. "ODM" refers to "original design manufacturer", which designs and manufactures products based on specifications from a second company; the second company then rebrands the products for sale.

17 AWS Availability Zones (AZ)
Each Availability Zone is: made up of one or more data centers; designed for fault isolation; and interconnected with other Availability Zones using high-speed private links. You choose your Availability Zones, and AWS recommends replicating across AZs for resiliency. AWS data centers are organized into Availability Zones (AZs). Each Availability Zone comprises one or more data centers, with some Availability Zones having as many as six data centers; however, no data center can be part of two Availability Zones. Each Availability Zone is designed as an independent failure zone. This means that Availability Zones are physically separated within a typical metropolitan region and are located in lower-risk flood plains (specific flood-zone categorization varies by region). In addition to having discrete uninterruptible power supply and onsite backup generation facilities, they are each fed via different grids from independent utilities to further reduce single points of failure. Availability Zones are all redundantly connected to multiple tier-1 transit providers. You are responsible for selecting the Availability Zones where your systems will reside, and systems can span multiple Availability Zones. You should design your systems to survive temporary or prolonged failure of an Availability Zone if a disaster occurs. Distributing applications across multiple Availability Zones allows them to remain resilient in most failure situations, including natural disasters or system failures.
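One common way to follow the multi-AZ recommendation above is to let an Auto Scaling group place instances across several Availability Zones. The group name, launch configuration, and zone names below are illustrative assumptions, not values from the course; the sketch only shows the pattern.

```python
# Minimal sketch: spread EC2 capacity across Availability Zones so the loss of one
# AZ leaves the application running. Assumes launch configuration "web-lc" exists.
import boto3

autoscaling = boto3.client("autoscaling", region_name="us-east-1")

autoscaling.create_auto_scaling_group(
    AutoScalingGroupName="web-tier",
    LaunchConfigurationName="web-lc",
    MinSize=2,
    MaxSize=6,
    DesiredCapacity=2,
    # Capacity is balanced across the listed zones as instances launch.
    AvailabilityZones=["us-east-1a", "us-east-1b"],
)
```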

18 AWS Regions
Each region is made up of two or more Availability Zones. AWS has 12+ regions worldwide. You enable and control data replication across regions. Communication between regions uses public Internet infrastructure. Availability Zones are grouped into regions, and each AWS Region contains two or more Availability Zones. When distributing applications across multiple Availability Zones, be aware of location-dependent privacy and compliance requirements, such as the EU Data Privacy Directive. When you store data in a specific region, it is not replicated outside that region; AWS never moves your data out of the region you put it in. It is your responsibility to replicate data across regions if your business needs require it. AWS provides information about the country and, where applicable, the state where each region resides; you are responsible for selecting the region to store data in based on your compliance and network latency requirements. Because data is not replicated between regions unless the customer does it, customers with these types of data placement and privacy requirements can establish compliant environments. Note that all communication between regions crosses public Internet infrastructure; therefore, use appropriate encryption methods to protect sensitive data.
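Because cross-region replication is the customer's responsibility, one common approach is S3 cross-region replication. The sketch below is illustrative only: the bucket names and IAM role ARN are assumptions, the destination bucket must live in a different region, and versioning must already be enabled on both buckets.

```python
# Minimal sketch: replicate new objects from a source bucket to a bucket in another
# region. Assumes both buckets exist, versioning is enabled on both, and the IAM
# role allows S3 to replicate objects on your behalf.
import boto3

s3 = boto3.client("s3")

s3.put_bucket_replication(
    Bucket="example-source-bucket",            # e.g. in us-east-1
    ReplicationConfiguration={
        "Role": "arn:aws:iam::123456789012:role/example-replication-role",
        "Rules": [{
            "ID": "replicate-everything",
            "Prefix": "",                      # empty prefix = all objects
            "Status": "Enabled",
            "Destination": {
                "Bucket": "arn:aws:s3:::example-destination-bucket",  # e.g. in eu-west-1
            },
        }],
    },
)
```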

19 AWS Regions
AWS is steadily expanding its global infrastructure to help customers achieve lower latency and higher throughput and to ensure that your data resides only in the region you specify. As you and other customers grow your businesses, AWS will continue to provide infrastructure that meets your global requirements. The isolated GovCloud (US) Region is designed to allow US government agencies and customers to move sensitive workloads into the cloud by addressing their specific regulatory and compliance requirements. AWS products and services are made available by region, so you may not see every region listed as available for a given service. You can run applications and workloads from a region to reduce latency to end users while avoiding the up-front expenses, long-term commitments, and scaling challenges associated with maintaining and operating a global infrastructure. For more information about global infrastructure, see aws/globalinfrastructure/

20 AWS Edge Locations 12 AWS Regions 50+ AWS Edge Locations
AWS edge locations provide local points-of-presence that commonly support AWS services like Amazon Route 53 and Amazon CloudFront. Edge locations help lower latency and improve performance for end users. For a more detailed look at AWS edge locations, see: aws/global-infrastructure/

21 Hybrid Deployment Case Study: The Weather Company

22 Hybrid Deployment on AWS: The Weather Company (TWC)
Challenge: TWC provides weather forecasts, data, and other content to millions of people across the world every day. They use data to provide analytic services on the relationship between weather and consumer behavior. They were operating 13 data centers with legacy systems and needed a cost-effective, scalable alternative. "Building and designing for the cloud is a different philosophy and mindset, and certainly a different technical approach." Bryson Koehler, EVP, CTO, CIO, The Weather Company. The Weather Company provides millions of people with the world's best weather forecasts, content, and data, every day.

23 Hybrid Deployment on AWS: The Weather Company (TWC)
Solution: data center migration. TWC designed a new weather forecast and data services platform powered by NoSQL databases running on Amazon EC2 instances. They reduced their on-premises IT environment from 13 data centers to 6. All data is stored in Amazon S3, even for on-premises workloads. TWC reaches a global audience by deploying across multiple AZs in the Northern Virginia, California, Ireland, and Singapore regions. "We deploy about 90% of our applications and systems on AWS, and we have the flexibility to easily port applications and systems as necessary for the business." Bryson Koehler, EVP, CTO, CIO, The Weather Company

24 Hybrid Deployment on AWS: The Weather Company (TWC)
"Using AWS, TWC can scale as necessary to handle constantly changing workloads and maintain our 11-millisecond response time." Bryson Koehler, EVP, CTO, CIO, The Weather Company
Benefits: TWC now ingests, stores, and analyzes 4 GB of weather data per second from over 100 sources. Their platform can handle more than 15 billion API calls each day, at a rate of 150,000 per second. By using AWS APIs, any developer (internal or external) can develop services for the data platform, and more than 150,000 external developers worldwide are now registered to use the platform. For more on this case study, see: weather-company/

25 All-In Deployment Case Study: AdRoll

26 AdRoll is a global leader in digital advertising retargeting products.
All-in on AWS: AdRoll. AdRoll offers customers its Real-Time Bidding (RTB) platform to create personalized ad campaigns using their own website data. They offer an SLA of 100 ms worldwide bid response time, but their skyrocketing data collection was putting that in danger. They needed to import and process 30 TB of compressed data and 60 billion advertiser requests per day, so they moved all-in to the AWS Cloud. AdRoll now manages its RTB platform using Amazon EC2, Amazon DynamoDB, and Amazon S3. "We've been able to seamlessly scale our infrastructure and reduce our fixed costs by 75% and operational costs by 83%." Valentino Volonghi, CTO, AdRoll

27 All-in on AWS: AdRoll How did they do that?
By moving to the AWS Cloud, AdRoll reduced their annual operational costs by 83% and fixed costs by 75%. How did they do that? By leveraging the full capability of a cloud-based IT environment. Here are a few examples:
2,500 instances running 8 hours/day on average worldwide
Per-instance cost of less than $0.05 per day
Traffic to their Amazon S3 buckets nearly as high as some of the most visited properties on the web
A 180 TB storage environment deployed for under 10% of the cost of the on-premises equivalent
Use of all 3 Amazon EC2 pricing models
Large quantities of data served directly from Amazon S3
AdRoll has customized their AWS environment to meet their platform's needs in a cost-effective and highly performant manner. These examples are just some of the ways they thought critically about not just using their AWS resources in place of on-premises resources, but using them intelligently to optimize the benefits they get out of operating all-in on AWS.

28 All-in on AWS: AdRoll
"If we wanted to replicate our AWS environment on premises, we would have to operate four data centers with on-call staff in each location, provision nearly 1,000 machines in each location, add additional capacity for cold storage, and develop code for auto-scaling and a common API." Valentino Volonghi, CTO, AdRoll
Also, AdRoll estimates that it would need at least 20 full-time engineers to effectively manage a physical environment:
8 full-time employees on call across four different data centers (two in each location for redundancy)
1 product manager for auto-scaling and API management
5 engineers to develop and maintain auto-scaling across the infrastructure
5 engineers to maintain a Cassandra installation instead of DynamoDB
1 engineering manager
With an average engineering salary of $150,000, the overall annual staffing cost for those 20 engineers would be $3 million. For more on the AdRoll case study, see: studies/adroll/

29 Case study: MLBAM
MLBAM is the interactive media and Internet company of Major League Baseball. MLBAM sought a platform capable of scaling up and down quickly to handle data on game plays, and chose AWS to ingest, analyze, and store 17+ petabytes of baseball data per season. It takes less than 12 seconds to capture, analyze, and deliver data to broadcasters for on-air analysis. Statcast scales to handle up to 13 games per day, or just one or two daily, and shuts down in the off-season. AWS helps MLBAM deliver new data in new ways to attract more fans. "By using AWS to power Statcast… we're ensuring that our sport is relevant, important, and the center of life for the next generation of fans." Joe Inzerillo, Executive Vice President and CTO, MLB Advanced Media
MLBAM is one example of a customer running big data workloads on AWS that combines a number of different data types and infrastructure to accomplish their goals. They use Amazon Kinesis, AWS Lambda, Amazon EC2, Amazon S3, Amazon ElastiCache, and AWS Direct Connect as part of their big data analytics infrastructure. Customers will also use third-party, big data ecosystem, and open-source tools as part of their big data infrastructure on AWS. MLBAM uses Elasticsearch extensively in its advanced game day statistics application. "Elasticsearch allows us to easily and quickly build bleeding edge big data and analytics applications using the ELK stack," says Sean Curtis, Architect at MLB.com.
Story background: MLB Advanced Media is the interactive media and Internet division of Major League Baseball. It needed an agile, highly scalable platform for its Player Tracking System, which is marketed to broadcasters as Statcast.
Solution and benefits: MLBAM decided to use AWS instead of building an in-house system to ingest, analyze, and store data captured during games. Their system captures data from games in less than 12 seconds by using a "black box" analytics engine that leverages AWS Lambda for analysis; scales up and down to accommodate variations in game schedules, with as few as one to two games a day or up to 13 games; and delivers an exciting new way for fans, broadcasters, and teams to understand the nuances of the game.

30 Video: MLBAM Builds New Player Tracking System on AWS Using Power of Big Data Analytics
Everything starts on the field. Players, umpires, and the ball are moving around, and that movement is tracked with stereoscopic machine vision and radar. The combination of those feeds is gathered and assembled at the ballpark and then sent into the AWS cloud for processing. The arithmetic of data points flowing around the field, turning into things like a home run or route efficiency, all happens in AWS on EC2 instances. They use some of the higher-level AWS services to trigger other services like Lambda and Kinesis, and then distribution-side services like S3 and CloudFront to get the results to their customers. They have a lot of data coming in at the same time and a lot of processes that have to kick off in parallel. Kinesis is a great single-job processor.
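MLBAM's actual processing code is not shown in the course, but the Lambda-plus-Kinesis pattern described above generally looks like the sketch below: Kinesis invokes a Lambda function with a batch of base64-encoded records, and the function decodes and processes each one. The payload field names and the downstream action are assumptions for illustration only.

```python
# Minimal sketch of the "Kinesis triggers Lambda" pattern described above.
# Each invocation receives a batch of records; the payload fields are hypothetical.
import base64
import json

def handler(event, context):
    for record in event["Records"]:
        # Kinesis delivers the original payload base64-encoded.
        payload = json.loads(base64.b64decode(record["kinesis"]["data"]))
        # Hypothetical processing step: derive a metric from the tracking data.
        print("play_id=%s speed=%s" % (payload.get("play_id"), payload.get("speed")))
    return {"processed": len(event["Records"])}
```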

31 Application Back-End Example: Mobile Gaming

32 Mobile Games Mobile app revenue is projected to grow continuously.
Mobile back-end technologies: HTTP-based APIs, external social APIs, save state, database, static data store, mobile push, analytics.
Mobile online features may include: social login, friends, leaderboards, push messages, analytics.
Mobile gaming customers that use AWS include:
Supercell, with Hay Day and Clash of Clans
Halfbrick, with Fruit Ninja
Wooga, with Diamond Dash and Pocket Village
Ubisoft, which leveraged AWS to launch 10 social games in 18 months
SEGA, which migrated forums and some game servers to AWS and reported a 50% savings in server costs
FunPlus, a successful Facebook game company that began without physical servers but quickly moved to AWS to gain agility and reduce costs

33 Mobile Games Back-End Concepts
Game back ends are looking more and more like web applications. Think in terms of APIs (GET friends, leaderboards) over HTTP + JSON, alongside multiplayer servers, binary assets, and game analytics. Lots of calls map to REST: get friends, leaderboards, active games, and even login. With multiplayer games, the core APIs, such as listing servers and joining games, map well to REST calls. Use stateful sockets sparingly.
Amazon S3 for game data such as assets, UGC (user-generated content), and analytics.
Auto Scaling group for capacity on demand, so you can respond to users.
Amazon ElastiCache (Memcached or Redis) to offload the database; a minimal leaderboard sketch follows below.
Amazon CloudFront as the CDN for DLC (downloadable content), assets, game saves, and UGC.
Elastic Load Balancing and Amazon EC2 behind an Elastic Beanstalk container: Beanstalk manages Elastic Load Balancing, Amazon EC2, Auto Scaling, and monitoring. Amazon RDS for relational data.
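As an illustration of the leaderboard and ElastiCache points above, a Redis sorted set is a common way to keep a live leaderboard. The endpoint, key name, and player IDs below are assumptions; ElastiCache for Redis is reached with a standard Redis client, so this is a generic Redis sketch rather than anything specific to the course.

```python
# Minimal leaderboard sketch using a Redis sorted set (e.g. on ElastiCache for Redis).
# Assumes the cache endpoint below is reachable from the game back end.
import redis

r = redis.Redis(host="example-cache.abc123.use1.cache.amazonaws.com", port=6379)

def record_score(player_id: str, score: int) -> None:
    # Add or update the player's score on the board.
    r.zadd("leaderboard", {player_id: score})

def top_players(n: int = 10):
    # Highest scores first, with each player's score attached.
    return r.zrevrange("leaderboard", 0, n - 1, withscores=True)

record_score("player-42", 1780)
print(top_players())
```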

34 What Information Can Help Improve Your Game?
Sentiment analysis: enjoying the game, engaged, like/dislike of new content, stuck on a level, bored, abandonment.
Players' behavior: hours played per day/week, number of sessions per day, level progression, friend invites/referrals, response to mobile push, money spent per week.
To get insight into how your application is perceived, it can be helpful to collect data about the behavior of your users. This means ingesting and processing streams of data as your application is being used, and then performing analysis on that data to gain insights from it. This slide includes some examples of how to use data analysis to gain these kinds of business insights.

35 Data Analytics For Gaming
Batch processing examples: What game modes do people like best? How many people have downloaded DLC pack 2? Where do most people die on map 4? How many daily players are there on average?
Real-time processing examples: What game modes are people playing right now? Are more or fewer people downloading DLC today? Are people dying in the same places, or different places? How many people are playing today, and is there any variance?
Downloadable content (DLC) is additional content for a video game distributed through the Internet by the game's official publisher or other third-party content producers. DLC can be of several types, ranging from aesthetic outfit changes to a new, extensive storyline, similar to an expansion pack. Both types of analysis are valuable, but when you can quickly determine what your players love, you can quickly give your game more of that quality. A batch-query sketch for one of the questions above follows below.
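Here is a minimal batch-analytics sketch with PySpark for the question "where do most people die on map 4?". The file layout and field names ("map", "location") are hypothetical, and this is not the course's reference code; it just shows what such a batch query typically looks like when the raw events already sit in S3.

```python
# Minimal batch sketch: find the locations with the most player deaths on map 4.
# Assumes death events were landed in S3 by the ingestion pipeline (path assumed).
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("death-hotspots").getOrCreate()

deaths = spark.read.json("s3://example-game-analytics/events/deaths/")

hotspots = (
    deaths.filter(F.col("map") == "map_4")
          .groupBy("location")          # e.g. a named zone or grid cell
          .count()
          .orderBy(F.col("count").desc())
)

hotspots.show(10)  # top 10 locations where players die most often
```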

36 Reference Architecture: Data Analytics For Gaming
Reference architecture components: clicks flow into a clickstream processing app, which feeds a clickstream archive, aggregate statistics, and in-game engagement trend analysis.
Use cases for data analytics:
Tracking player performance to keep games balanced and fair.
Tracking game performance metrics by geography/region for online games where players sign in from around the world.
Tracking sales performance of items sold in-game.
Tracking performance of online ads served in-game.
Accelerating delivery of the metrics described above to near real time.
Continual in-game metrics on user engagement.
Real-time analytics on user engagement in the game.
Optimized predictions of when and where to sell in-game purchases.
Business benefits of data analytics:
Reduce the operational burden on your developers.
Scale your data analytics to match highly variable gaming loads without overpaying for spare capacity or waiting longer for results when usage is up.
Try many more data experiments and find truly game-changing new metrics and analyses.
Ingest and aggregate gaming logs that were previously uncollected or unused.
Accelerate the time to results of batch processing of game logs, for example reducing the delivery of gaming metrics from every 48 hours to every 10 minutes.
Deliver continuous, real-time game insight data from hundreds of game servers.

37 Case Study: Supercell
Analytics pipeline for Clash of Clans: in-game activity is sent to Amazon Kinesis; Amazon EC2 instances run real-time Kinesis apps and an in-game trends dashboard; aggregate statistics are stored in Amazon S3; and the data is loaded into Amazon Redshift. Finland-based Supercell, founded in 2010 by six game-industry veterans, is one of the fastest-growing social game developers in the world. With a staff of just over 100 employees, Supercell has three games that attract tens of millions of players on iOS and Android devices every day: Hay Day (a social farming game), Clash of Clans (a social resource management game with strategic combat), and Boom Beach (a combat strategy game, released in March 2014). Here, we are looking at Clash of Clans. In addition to loading into Redshift for business intelligence tasks, the data is also moved into Amazon S3 for long-term archiving. If the team ever needs to recover data, load retrospective information into a new data warehouse instance, or augment their reports with data that was not originally available, they can restore the vaults from Amazon Glacier. Using Kinesis with Amazon EC2, Amazon S3, Amazon Glacier, and Redshift gives Supercell a complete, 360-degree view of how their customers are playing their games and provides the resources to dive into that data to answer specific questions, such as who the high scorers are, which areas of the game world are most popular, or how their most recent updates are performing. "We are using Amazon Kinesis for real-time delivery of game insight data sent by hundreds of our game engine servers. Amazon Kinesis also offloads a lot of developer burden in building a real-time, streaming data ingestion platform, and enables Supercell to focus on delivering games that delight players worldwide," says Sami Yliharju, Services Lead at Supercell. You can read more about the Supercell case study at
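The Redshift loading step in a pipeline like the one described above is typically a COPY from Amazon S3. The sketch below is illustrative only: the connection details, table name, S3 path, and IAM role are placeholder assumptions, not Supercell's real configuration. It uses psycopg2 to issue the COPY against a Redshift cluster.

```python
# Minimal sketch: load aggregated game events from S3 into a Redshift table.
# Connection parameters, table name, S3 path, and IAM role are placeholders.
import psycopg2

conn = psycopg2.connect(
    host="example-cluster.abc123.us-east-1.redshift.amazonaws.com",
    port=5439,
    dbname="analytics",
    user="analytics_user",
    password="example-password",
)

copy_sql = """
    COPY game_events
    FROM 's3://example-analytics/aggregated/2016-05-01/'
    IAM_ROLE 'arn:aws:iam::123456789012:role/example-redshift-load-role'
    FORMAT AS JSON 'auto';
"""

with conn, conn.cursor() as cur:
    cur.execute(copy_sql)   # Redshift pulls the files directly from S3
conn.close()
```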

38 Thank you! Any Questions?


