
Ilia Baldine, Jeff Chase, Mike Zink, Max Ott

 14 GPO-funded racks
  ◦ Partnership between RENCI, Duke, and IBM
  ◦ IBM x3650 M3/M4 servers
     1 x 146 GB 10K SAS hard drive + 1 x 500 GB secondary drive
     48 GB RAM
     Dual-socket 8-core Sandy Bridge CPUs
     Dual-port 10G Chelsio adapter
  ◦ BNT 10G/40G OpenFlow switch
  ◦ DS3512 6 TB sliverable storage
     iSCSI interface for head-node image storage as well as experimenter slivering
 Each rack is a small networked cloud
  ◦ OpenStack-based
  ◦ EC2 nomenclature for node sizes (m1.small, m1.large, etc.)
  ◦ Interconnected by a combination of dynamic and static L2 circuits through regional networks and national backbones
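As a rough illustration of the EC2-style node sizing, the sketch below estimates how many instances of a given flavor fit on one rack worker node. The per-flavor vCPU/RAM numbers are assumptions chosen for illustration, not the actual ExoGENI flavor definitions; the node totals come from the hardware listed above.

```python
# Illustrative only: hypothetical EC2-style flavor table.
FLAVORS = {
    "m1.small":  {"vcpus": 1, "ram_gb": 2},
    "m1.medium": {"vcpus": 2, "ram_gb": 4},
    "m1.large":  {"vcpus": 4, "ram_gb": 8},
}

# One x3650 worker from the slide: dual-socket 8-core CPU, 48 GB RAM.
NODE_CORES = 16
NODE_RAM_GB = 48

def max_instances(flavor: str) -> int:
    """How many instances of `flavor` fit on one worker node,
    limited by whichever of cores or RAM runs out first."""
    f = FLAVORS[flavor]
    return min(NODE_CORES // f["vcpus"], NODE_RAM_GB // f["ram_gb"])

print(max_instances("m1.large"))  # cores allow 4, RAM allows 6 -> 4
```

With these assumed sizes, the m1.large count is core-bound while smaller flavors pack more densely.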

 3 racks deployed
  ◦ RENCI, GPO, and NICTA
 2 existing racks
  ◦ Duke and UNC
 2 more racks coming
  ◦ FIU and UH
 Connected via BEN (http://ben.renci.org), LEARN, NLR FrameNet, and Internet2

 Strong isolation is the goal
 Compute instances are KVM-based and get a dedicated number of cores
 VLANs are the basis of connectivity
  ◦ VLANs can be best-effort or bandwidth-provisioned (within and between racks)
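The best-effort versus bandwidth-provisioned distinction can be sketched as a small data model. This is a hypothetical illustration of how a slice's links might be described, not the ORCA/ExoGENI request format; all class and field names are invented.

```python
# Hypothetical model of a slice's VLAN links (illustrative names only).
from dataclasses import dataclass
from typing import Optional, Tuple

@dataclass
class VlanLink:
    name: str
    endpoints: Tuple[str, str]
    bandwidth_mbps: Optional[int] = None  # None means best effort

    @property
    def provisioned(self) -> bool:
        """True if the link carries a bandwidth guarantee."""
        return self.bandwidth_mbps is not None

# Intra-rack best-effort link vs. inter-rack provisioned circuit:
be = VlanLink("lan0", ("node0", "node1"))
qos = VlanLink("xconnect", ("renci-node", "bbn-node"), bandwidth_mbps=500)
print(be.provisioned, qos.provisioned)  # False True
```

The same link abstraction covers both cases; only the presence of a bandwidth figure changes how the underlying circuit would be set up.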

[Diagram: tutorial architecture — a persistent server hosting the OML server, AM, iRODS, XMPP server, EC, and visualization; tutorial VMs each running a resource controller (RC) and measurement library (ML); an iRODS client; spanning RENCI, BBN, and IREEL]

 iRODS
  ◦ Integrated Rule-Oriented Data System: manages massive distributed data sets
 IREEL
  ◦ Internet Remote Emulation Experiment Laboratory
  ◦ Measurement portal
 OMF/OML
  ◦ ORBIT Measurement Framework
  ◦ ORBIT Measurement Library
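To make the OML idea concrete: an experiment declares a measurement point (a typed tuple schema) and injects samples into it, which a collection server stores for later analysis. The toy sketch below mimics that pattern only; it is not the real OML client library API, and all names are illustrative.

```python
# Toy sketch of the OML measurement-point pattern (not the real OML API).
import time

class MeasurementPoint:
    def __init__(self, name, schema):
        self.name = name        # e.g. "throughput"
        self.schema = schema    # ordered field names the samples must match
        self.samples = []       # in real OML, streamed to the OML server

    def inject(self, *values):
        """Record one timestamped sample; reject schema mismatches."""
        if len(values) != len(self.schema):
            raise ValueError("sample does not match schema")
        self.samples.append((time.time(), values))

mp = MeasurementPoint("throughput", ["flow_id", "mbits_per_s"])
mp.inject(1, 94.2)
mp.inject(2, 88.7)
print(len(mp.samples))  # 2
```

The schema check mirrors OML's typed measurement streams: every injected sample must match the declared fields.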