QCloud: Queensland Cloud Data Storage and Services (27 March 2012)

NATIONAL RDSI
- 6 primary, 4 additional nodes
- PB by Gb/sec node interconnect
- Storage for large national collections
- Shared access by research communities
- Research data management
- On-shore data cloud
- Low-cost research production quality of service

NATIONAL NeCTAR RESEARCH CLOUD
- 6 primary nodes (maybe 7)
- 24,000 cores by 2013
- On-demand compute capacity for research
- Hosted shared services for research communities
- Virtual laboratories
- Research tools, workflows
- Uni of Melbourne, QCIF/UQ, ANU, Monash Uni

QCLOUD COMPONENTS
- RDSI DataStore
- NeCTAR Research Cloud
- QERN Cloud
- Genomics Cloud
(Diagram scale annotations: 5x, >15x)

QUEENSLAND RDSI NODES
- ~30 Petabytes by 2014
- Primary node hosted at UQ (embargoed)
- Additional node at JCU (awaiting decision)
- National focus: ecosystems, genomics
- 75% Merit Allocated Collections
- 25% "Collection Development"
- Can add RDSI+ commercial research storage

COLLECTIONS
Market Data Collections:
- Active research projects
- Collaborations across multiple locations
- Fast disk, flash memory: immediate access
Vault Data Collections:
- Rarely accessed data, archival storage
- Broad availability
- Tape, slow disk

RESEARCH COMMUNITY RDSI NODE SERVICES
Owner:
- Qualification, Capture, Ingest, Bulk Update
- Curation, Metadata (with ANDS)
- Resilience (multiple copies)
- Management, Entitlements to Access
- Monitoring, Support, Help Desk, Training
User:
- Entitlements to Access
- Discovery, Read, Update, Duplicate
- Monitoring, Support, Help Desk, Training

TARGET RESEARCH USERS/SERVICES
Power Users:
- PB-scale data collections
- Low-level access services
- Specialised treatment
Large Data Users:
- TB-scale data collections
- Direct data access, FTP
- Automated allocations
Long-Tail Users:
- GB-scale data collections
- Cloud access, Dropbox style on PC
- Little allocation surveillance
(Chart axes: data collection size vs number of users)
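The three user bands above amount to a simple routing rule on collection size. A minimal sketch of that rule; the tier names follow the slide, but the exact byte thresholds are illustrative assumptions, not stated QCloud policy:

```python
def storage_tier(collection_bytes: int) -> str:
    """Map a research data collection's size to the service band on the
    slide. Thresholds are illustrative assumptions."""
    TB = 10**12
    PB = 10**15
    if collection_bytes >= PB:
        return "power user"       # PB-scale: low-level access, specialised treatment
    if collection_bytes >= TB:
        return "large data user"  # TB-scale: direct access/FTP, automated allocations
    return "long-tail user"       # GB-scale: cloud access, Dropbox-style on PC

# Example: a 40 TB collection falls in the middle band
print(storage_tier(40 * 10**12))
```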

QCIF/UQ NeCTAR NODE
- 4,000 cores, up to 4,000 virtual machines
- Research service quality
- 70% for NeCTAR Allocated Services
- 30% for Queensland Allocation

SERVICES
Hosted Services:
- Permanent web sites
- Research communities
- Virtual laboratories
- Research tools
- Collaboration
Computation:
- On-demand virtual machines
- Off-load "small" jobs from HPC/clusters

COMPUTE USERS/SERVICES
Long-Tail Users / Hosted Services:
- 1-4 cores
- Small HPC jobs
Medium Compute Users:
- 4-16 cores
- Large memory VMs (1TB+)
- Automated allocations
- Smaller HPC jobs
Tier 1, 2, 3 HPC:
- More than 16 cores
- Specialised treatment
(Chart axes: number of cores required vs number of users)
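The compute bands above, combined with the off-loading of "small" jobs from HPC on the previous slide, suggest a placement rule by core count. A sketch under stated assumptions: the slide's ranges overlap at 4 and 16 cores, so the boundary handling here is an assumption, as is treating 1 TB+ memory as a medium-tier trigger:

```python
def placement(cores: int, memory_gb: int = 0) -> str:
    """Route a compute request to the bands on the slide.
    Boundary handling at exactly 4 and 16 cores is an assumption."""
    if cores > 16:
        return "HPC (tiers 1-3, specialised treatment)"
    if cores > 4 or memory_gb >= 1024:  # 1 TB+ VMs listed under the medium tier
        return "cloud: medium compute user"
    return "cloud: long-tail user / hosted service"

# Example: an 8-core job is off-loaded to the research cloud, not HPC
print(placement(8))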

EARLY NODE (QERN)
pQERN:
- Operational October 2011
- Pre-RDSI/NeCTAR
- Genomics++
QERN:
- Delivered February
- Operational ~April
- 330 cores
- 0.5 PB disk
- RDSI/NeCTAR services
- Early adopter researchers
- Operational experience

PROJECTED NODE SCALE-UP
(Chart: projected growth in RDSI data storage (PB) and NeCTAR Research Cloud cores, from today's QERN onward)

PARTICIPANT ROLES
- QCIF: Governance, Management, Research Support; RDSI Lead Agent; NeCTAR Lead Agent
- UQ RCC: Management, SysAdmin
- JCU: Management, Operations
- UQ ITS: Acquisition, Operations
- Partners: Research Support
- Collection & Service Owners & Users


QERN Progress
Timeline (mid-February to Primary Node, Jul 2012):
- Familiarisation: hardware installation; software systems
- Service Definition and Delivery: COO and Solution Architect start; service catalogue; software stack; registration processes; trial usage
- Production: early adopters; basic services; continuing service introduction

QCLOUD PARTNERS
QCIF, UQ, JCU, CQU, Griffith, QUT, USQ, USC, Bond, CSIRO, NICTA, TERN, DERM, DTMR, DEEDI, Ergon Energy

FUNDING FOR QCLOUD
Cash ($15M):
- RDSI Primary Node: $1.5M (contracting)
- NeCTAR Research Cloud: $2M (contracting)
- Smart Futures Co-investment Fund: $3.6M (signed)
- RDSI Additional Node: $266K (awaiting decision)
- RDSI ReDS, DaSh, NRN funds: $6.5M+ (projected)
- DERM, DTMR co-investments: $600K (offered)
- QCIF QERN: $200K
In-kind ($5M):
- UQ: data centre facilities, operations and power consumption plus researcher support, $2.3M
- JCU: data centre facilities, operations and power consumption plus researcher support, $700K
- Other members, CSIRO, NeCTAR, NICTA, TERN, DERM, DTMR, Ergon Energy: QCIF researcher support, $2M

Agreements Progress
- Smart Futures: signed with DEEDI, Jan 2012; funds flow in July; contingent on RDSI, NeCTAR
- RDSI: final adjustments in progress; signature expected within April
- NeCTAR Research Cloud: feedback provided to NeCTAR; signature expected within April
- Operations (UQ, JCU): drafting agreement text; RDSI, NeCTAR agreements as back-to-back schedules
- Data, service owners: to be completed; preliminary arrangements for early users

RISK MANAGEMENT – QCIF
- Power upgrade at UQ DC2 (JCU also?): decision deferred to May
- Transfer to new UQ data centre: PACE building offer replacing DC2; network
- Staff and skills availability
- Software and skills maturity
- Authentication, entitlements

RISK MANAGEMENT – NATIONAL
- Levels of interaction: Lead Agents, other nodes
- National support and help desk structure
- RDSI ReDS and DaSh Programs

RISK MANAGEMENT – LONG TERM
- RDSI value proposition to researchers
- Growth past 2014
- Business model for sustainability
- Commercial service quality

QUESTIONS?