Pacific Rim Application and Grid Middleware Assembly (PRAGMA)1

Presentation transcript:

Pacific Rim Application and Grid Middleware Assembly (PRAGMA)1
Philip Papadopoulos, Ph.D.
Chief Technology Officer, San Diego Supercomputer Center
Research Scientist, Calit2, University of California San Diego

Community of Practice. The structural characteristics of a community of practice are defined by three elements: a domain of knowledge, a notion of community, and a practice.
- Domain of knowledge – creates common ground, inspires members to participate, guides their learning, and gives meaning to their actions.
- Community – creates the social fabric for learning; a strong community fosters interaction and encourages a willingness to share ideas.
- Practice – while the domain provides the general area of interest for the community, the practice is the specific focus around which the community develops, shares, and maintains its core of knowledge.

The Pacific Rim Applications and Grid Middleware Assembly (PRAGMA), established in 2002, is a robust, international network of research scientists from more than 30 institutions who address shared science and cyberinfrastructure challenges. PRAGMA pursues activities in four broad, interdependent areas:
- Fostering international scientific expeditions by forging teams of domain scientists and cyberinfrastructure researchers who develop and test the information technologies needed to solve specific scientific questions and create usable, international-scale cyber environments;
- Developing and improving a grassroots, international cyberinfrastructure for testing, computer science insight, and advancing scientific applications by sharing resources, expertise, and software;
- Infusing new ideas by developing new researchers with experience in cross-border science and by continuing to engage strategic partners;
- Building and enhancing the essential people-to-people trust and organization developed through regular face-to-face meetings, a core component of PRAGMA's success.

This talk provides an overview of PRAGMA, highlights its accomplishments in the twelve years since its inception, and focuses on two current scientific "virtual" expeditions: one on predicting freshwater quality in lakes, the other on biodiversity in extreme environments. The talk presents lessons learned in creating a distributed yet coordinated network of researchers, gives examples from PRAGMA and from another ecological network, the Global Lakes Ecological Observatory Network, and notes opportunities for active student engagement in these networks. One goal of the talk is to explore how PRAGMA might help stimulate a discussion around such a network for biodiversity work in Southeast Asia and beyond.

1 US participation funded by NSF Award OCI-1234983

PRAGMA Members and Affiliates
A Community of Practice: Scientific Expeditions and Infrastructure Experiments for Pacific Rim Institutions and Researchers. Established in 2002.

Why did we start PRAGMA?
- To realize a vision of technology working together
- Which means people have to work together!

For individuals new to a PRAGMA workshop – welcome! As science becomes more distributed and collaborative, small to medium-sized, geographically distributed groups increasingly want to work together. PRAGMA is about enabling this long tail of science through scientific expeditions and infrastructure experiments. We have always focused on the Pacific Rim; the slide shows the institutions involved, including the four new sites accepted as members at PRAGMA 24.

PRAGMA's Expedition Model of Collaboration
Expeditions are long-term, sustained efforts in which domain scientists and technology developers co-design solutions to answer scientific questions.
- Lake ecology: understand the processes that govern lake eutrophication. Challenge: model lakes (simulation) across large ranges of inputs to understand algal blooms and fresh-water quality.
- Biodiversity: understand how patterns of species diversity emerge and how they are sustained. Challenge: make the Lifemapper simulation cluster portable to allow greater use by collaborators with restricted data.
- PRAGMA Experimental Network Testbed (ENT): explore software-defined networking in an international context. Challenge: understand and evaluate Software Defined Networking capabilities at international scale.

Thoughts about collaboration: How do I know what to build if I'm not talking deeply with potential users? How can I understand their pain points if I'm not willing to sit down and listen?

It's NOT the capacity of the network – it's connectivity! In general, data in the PRAGMA context isn't "big".
Biodiversity data sets are:
- O(10s of GBs)
- By policy, may need to be kept "in place"
- Difficult to integrate into a common spatial frame: species observation data, climate simulation output data, land use data, satellite observations
Lake ecology data sets are:
- Real-world sensor inputs
- In-person observations (e.g., the Lake Observer app on the Play Store/App Store)
- Simulation output from a 1D model
Data can be shared, but limited sharing is the norm.

Virtual Image Sharing: Building on the International Development of PRAGMA's Multi-Cloud Testbed
Goal: a persistent cloud testbed on which PRAGMA members can run experiments.
Tools:
- CZISO – enables sharing of virtual cluster images; leverages UCSD's unlimited Google storage
- Clonezilla (NCHC, Taiwan) – converts images among formats (raw, qcow, zvol)
- Cloud Scheduler – enhanced interface to boot and manage virtual clusters
- pragma_boot – deploys virtual cluster images across PRAGMA physical sites using local network configuration settings
[Diagram: raw images pass through the Universal Clonezilla Live VM and pragma_boot into local virtual image repositories, which the Cloud Scheduler shares across NCHC (Taiwan), AIST and NAIST (Japan), and UCSD (USA).]
Speaker notes – still needed: mention how GRAPLEr relates; mention how this drives and uses technologies developed elsewhere in PRAGMA; mention the international sites and partners involved.
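To make the image-sharing step concrete, the sketch below shows one way a raw virtual-cluster disk image could be converted to a more portable format and checksummed before being pushed to a shared repository. This is illustrative only: it uses qemu-img as a stand-in for the Clonezilla-based conversion described above, and the file names and the upload step are hypothetical.

```python
# Illustrative sketch, not the actual CZISO/Clonezilla tooling: convert a raw
# virtual-cluster disk image to qcow2 and compute a checksum so remote PRAGMA
# sites can verify the copy they download. Image names are placeholders.
import hashlib
import subprocess
from pathlib import Path

def convert_raw_to_qcow2(raw_path: str, qcow2_path: str) -> None:
    """Convert a raw disk image to the more compact qcow2 format."""
    subprocess.run(
        ["qemu-img", "convert", "-f", "raw", "-O", "qcow2", raw_path, qcow2_path],
        check=True,
    )

def sha256sum(path: str, chunk_size: int = 1 << 20) -> str:
    """Stream the file through SHA-256 so large images don't need to fit in RAM."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

if __name__ == "__main__":
    raw_image = "biodiversity-cluster.raw"      # hypothetical image name
    qcow2_image = "biodiversity-cluster.qcow2"
    convert_raw_to_qcow2(raw_image, qcow2_image)
    print(qcow2_image, sha256sum(qcow2_image), Path(qcow2_image).stat().st_size)
```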

Our Approach: use "overlay" networks to provide a trusted environment for focused sharing of resources.

General Lake Model (GLM) through GRAPLEr
GRAPLEr Web Service:
- Exposes a simple R desktop interface to users
- Handles packaging of multiple simulations into an HTCondor job, compression, job submission, and post-processing
[Diagram: the user's R session, with the GRAPLEr library, sends requests over the Internet to the GRAPLEr web service, which submits work to an HTCondor pool spanning UF, VT, UW, Azure, and PRAGMA resources connected by an IPOP overlay network, and returns results to the user.]
Reference: GRAPLEr: A Lake Ecology Distributed Collaborative Environment Integrating Overlay Networks, High-throughput Computing and Web Services. Subratie, K., Aditya, S., Figueiredo, R. J., Carey, C., Hanson, P.
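The sketch below illustrates the "package many simulations into one HTCondor job" step in a generic way. It is not the GRAPLEr implementation: the wrapper script run_glm.sh, the glm_configs directory layout, and the working directory are hypothetical placeholders, and the submit description is a minimal example of standard HTCondor syntax.

```python
# Minimal sketch, assuming a directory of per-simulation GLM configurations:
# compress each one into a zip archive and submit them as a single HTCondor
# cluster, one job per archive. Names are hypothetical, not GRAPLEr's.
import shutil
import subprocess
from pathlib import Path

def package_simulations(sim_dirs: list[Path], workdir: Path) -> int:
    """Compress each simulation directory into sim_<i>.zip inside workdir."""
    workdir.mkdir(parents=True, exist_ok=True)
    for i, sim_dir in enumerate(sim_dirs):
        shutil.make_archive(str(workdir / f"sim_{i}"), "zip", root_dir=sim_dir)
    return len(sim_dirs)

def submit_to_condor(workdir: Path, n_jobs: int) -> None:
    """Write a submit description and hand it to condor_submit."""
    submit_file = workdir / "sweep.sub"
    submit_file.write_text(
        "executable = run_glm.sh\n"            # hypothetical wrapper: unzip, run GLM
        "arguments = sim_$(Process).zip\n"
        "transfer_input_files = sim_$(Process).zip\n"
        "should_transfer_files = YES\n"
        "when_to_transfer_output = ON_EXIT\n"
        "output = sim_$(Process).out\n"
        "error  = sim_$(Process).err\n"
        "log    = sweep.log\n"
        f"queue {n_jobs}\n"
    )
    # Run from workdir so the relative paths in the submit file resolve there.
    subprocess.run(["condor_submit", submit_file.name], cwd=workdir, check=True)

if __name__ == "__main__":
    sims = sorted(Path("glm_configs").iterdir())   # hypothetical config directories
    work = Path("condor_work")
    submit_to_condor(work, package_simulations(sims, work))
```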

Impacts
- A branch of fresh-water ecology has changed dramatically: from statistical analysis only to analysis plus simulation
- 10K+ simulations (a parameter sweep) for a 5-year analysis, accomplished overnight with HTCondor
- The capability is being brought into the classroom by PI Cayelan Carey at Virginia Tech
- Also experimenting in a hybrid networking space
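To show the scale of such a sweep, the short sketch below enumerates roughly 10,000 parameter combinations. The parameter names and ranges are hypothetical; the talk does not specify which GLM inputs were actually varied.

```python
# Illustrative only: enumerating a ~10K-run parameter sweep. The swept
# parameters (temperature offset, wind factor, nutrient scale) are assumptions
# for the example, not the GLM inputs used in the GRAPLEr analyses.
from itertools import product

air_temp_offsets = [round(-2 + 0.5 * i, 2) for i in range(9)]        # 9 values
wind_factors     = [round(0.6 + 0.05 * i, 2) for i in range(11)]     # 11 values
nutrient_scales  = [round(0.5 + 0.015 * i, 3) for i in range(101)]   # 101 values

sweep = list(product(air_temp_offsets, wind_factors, nutrient_scales))
print(f"{len(sweep)} simulations")   # 9 * 11 * 101 = 9999, roughly the 10K scale
for run_id, (dt, wf, ns) in enumerate(sweep[:3]):
    print(run_id, dt, wf, ns)        # each tuple would become one simulation config
```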

Overlay and SDN Hybrid for Seamless High-Performance Virtual Networks
Kyuho Jeong (UF), Kohei Ichikawa (NAIST), Renato Figueiredo (UF)

IPOP Recap
- Creates direct P2P virtual tunnels across multiple NATs/firewalls using the ICE protocol
- Each virtual tunnel is mapped to either a MAC or an IP address
- A TAP device interfaces with the OS network stack
- Packets are encapsulated with an IP/UDP header; IPOP runs in the application layer
- N2N encryption via DTLS (Datagram Transport Layer Security)
[Diagram: nested VMs attach through veth pairs and an L2 bridge to the IPOP TAP device on each host; each host sits behind its own NAT, and IPOP tunnels the traffic between them to form the IPOP overlay.]
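The bare-bones sketch below shows only the encapsulation idea: reading Ethernet frames from a Linux TAP device and tunneling them inside UDP datagrams from user space. It deliberately omits everything that makes IPOP useful (ICE-based NAT traversal, peer discovery, DTLS encryption), requires Linux and root, and uses a placeholder peer address.

```python
# Minimal sketch of layer-2-over-UDP encapsulation, NOT the IPOP implementation.
# Assumes Linux, root privileges, and a hypothetical remote overlay endpoint.
import fcntl
import os
import socket
import struct

TUNSETIFF = 0x400454ca      # ioctl to attach to a tun/tap interface
IFF_TAP   = 0x0002          # layer-2 (Ethernet) mode
IFF_NO_PI = 0x1000          # no extra packet-info header on reads

PEER = ("198.51.100.7", 5800)   # placeholder address of the remote tunnel end

def open_tap(name: str = "ovl0") -> int:
    """Create/attach a TAP interface and return its file descriptor."""
    fd = os.open("/dev/net/tun", os.O_RDWR)
    ifr = struct.pack("16sH", name.encode(), IFF_TAP | IFF_NO_PI)
    fcntl.ioctl(fd, TUNSETIFF, ifr)
    return fd

def forward_frames() -> None:
    """Encapsulate every Ethernet frame read from the TAP device in UDP/IP."""
    tap = open_tap()
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    while True:
        frame = os.read(tap, 65535)     # one Ethernet frame per read
        sock.sendto(frame, PEER)        # kernel adds the UDP/IP header

if __name__ == "__main__":
    forward_frames()
```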

IPOP Deployment across the Internet
[Diagram: layer-2 network virtualization spanning Amazon EC2, Google Compute Engine, and the PRAGMA Cloud/CloudLab over the public Internet. On each compute instance (VM), containers attach to an Open vSwitch bridge; IPOP provides the overlay datapath between instances across each cloud's layer-3 network, alongside an SDN datapath, so the virtual layer appears as one network above the physical layer.]
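The following sketch illustrates the per-instance wiring the diagram implies: an Open vSwitch bridge joining local container interfaces to the overlay's TAP device so layer-2 traffic can cross cloud boundaries. The interface and bridge names are placeholders, and this is my reading of the diagram rather than the actual PRAGMA deployment tooling.

```python
# Hypothetical sketch: build an Open vSwitch bridge on one compute instance and
# attach container veth ends plus the overlay TAP device. Names are placeholders.
import subprocess

def sh(*cmd: str) -> None:
    """Run a command and fail loudly, to keep each wiring step readable."""
    subprocess.run(cmd, check=True)

def build_bridge(bridge: str, container_ports: list[str], overlay_tap: str) -> None:
    sh("ovs-vsctl", "--may-exist", "add-br", bridge)                  # create bridge
    for port in container_ports:                                      # attach containers
        sh("ovs-vsctl", "--may-exist", "add-port", bridge, port)
    sh("ovs-vsctl", "--may-exist", "add-port", bridge, overlay_tap)   # attach overlay TAP

if __name__ == "__main__":
    # veth ends of two local containers plus the overlay TAP device (placeholders)
    build_bridge("br-overlay", ["veth-c1", "veth-c2"], "ovl0")
```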

PRAGMA on the Web
Home: http://www.pragma-grid.org
GitHub: http://github.com/pragmagrid