Gurcharan S. Khanna Director of Research Computing RIT

Mission
Help researchers do better research via:
- CyberInfrastructure
- Collaboration
- Community

Vision
- CI: a friendly, integrated research environment
- Community: persistent, connected communities
- Collaboration: easy, productive relationships

Current CyberInfrastructure (Startup systems)
- Large-memory computer: 32 GB RAM, 4 Opteron cores
- HPC cluster: IBM, 52 nodes, each with 2 Pentium III cores and 512 MB RAM
- Combined fileserver and backup: ~6 TB of fast disk
- HTC clients across campus under Condor (student help)
- Network: 10 GigE core, 1 GigE to research-center desktops
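The campus-wide HTC pool under Condor is driven by submit description files along the lines of the sketch below; the executable name, file names, and job count here are hypothetical, shown only to illustrate how batch work fans out to idle campus clients.

```text
# Hypothetical Condor submit description file for the campus HTC pool.
universe     = vanilla            # run an ordinary executable, no relinking
executable   = analyze.sh         # hypothetical per-task analysis script
arguments    = $(Process)         # each job receives its own task index
output       = out.$(Process)
error        = err.$(Process)
log          = analyze.log
requirements = (OpSys == "LINUX") # match only Linux clients in the pool
queue 100                         # fan 100 independent jobs across campus
```

Submitted with `condor_submit`, each job is matched to an idle machine in the pool, which is what lets otherwise-unused desktops across campus contribute cycles.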

Strategic Issues
- Too few users with scalable apps
- Lack of accessible interfaces and middleware
- High initial demand per user
- High initial investment per user

Tactical Options
- Share resources and users
- Partner with industry and the state
- Use grid communities, e.g., NYSGrid, Open Science Grid, SURAgrid, Great Plains Network, TeraGrid

R & D
- HPC cluster backplane (Cisco)
- MPI spanning clusters (Cisco)
- High-speed filesystems (Cisco, Neterion)
- Sun: loaner SunFire X4500 "Thumper" and Sun 10 GigE NICs, with a Lustre filesystem connected to the Indiana U. Data Capacitor on TeraGrid
- Sun: acquiring two "Thumpers" for 98 TB of high-performance data servers*
- High-speed networks (Cisco, Neterion, NYSERNet, Internet2, Georgia State)
*Choquette
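The value of 10 GigE NICs on a path like the one to the Data Capacitor can be seen with back-of-envelope arithmetic (illustrative figures only, assuming ideal link utilization): moving a 1 TB dataset takes hours at 1 GigE but minutes at 10 GigE.

```python
def transfer_time_s(num_bytes: float, link_bps: float) -> float:
    """Idealized transfer time: payload bits divided by raw link rate.

    Ignores protocol overhead and disk limits, so real transfers are
    slower; the point is the order of magnitude, not the exact figure.
    """
    return num_bytes * 8 / link_bps

ONE_TB = 1e12  # bytes

print(transfer_time_s(ONE_TB, 1e9))   # 1 GigE:  8000 s, over 2 hours
print(transfer_time_s(ONE_TB, 1e10))  # 10 GigE:  800 s, about 13 minutes
```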

CI Vision
- Research Computing shared central systems
- RIT campus grid of mixed resources
- New York State grid of institutions such as Buffalo, RPI, Stony Brook, Cornell
- Global grid community: Open Science Grid, TeraGrid, SURAgrid, Great Plains Network, and others

Current Collaboration Infrastructure
- ICELab: Interactive Collaboration Environments Lab in the Center for Advancing the Study of Cyberinfrastructure
- Mission: R&D, teaching & learning, events, evaluation
- 5 "enhanced" Access Grid nodes on campus
- Partnership with the Sun AEG program for 3 Ultra 40 workstations*
- Center for Imaging Science, Color Science, CASCI, IT Collaboratory
- Student co-op staffing
*Choquette

Strategic Goals
- Build connected communities
- Foster collaboration
- Outreach

Projects
- Philadelphia Orchestra live broadcasts to RIT over Internet2
- Virtual Theatre: live performances via Access Grid between RIT and U. Utah
- East High School, Rochester: Access Grid for Ghana and UK student partners

Tactical Plan for ImagineRIT
- 12 "CyberPortals" on campus: 8 colleges, Library, Student Alumni Union, CIMS, Field House
- 3 in Rochester: Strong Memorial Hospital, East High School, School of the Arts
- 3 at RIT campuses abroad: Kosovo, Dubrovnik, Dubai

R & D
- Uncompressed high-definition video over IP: Cisco, iHDTV (U. Washington), and UltraGrid (McGill, ISI)
- New network protocols for 10 GigE and 40 GigE: Georgia State U., Cisco, NYSERNet, Internet2
- GUI development to access multipoint streams: open
- Compute engines to handle high network loads: open
- Restricted audio in public spaces: NTID (National Technical Institute for the Deaf at RIT)
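Why uncompressed HD video needs 10 GigE paths rather than ordinary 1 GigE links follows from raw payload arithmetic. This is a sketch assuming 1080-line video at 30 frames/s with 10-bit 4:2:2 sampling (20 bits per pixel on average); the SMPTE 292M serial rate of 1.485 Gb/s is slightly higher because it also carries blanking intervals.

```python
# Raw payload rate of uncompressed 1080-line HD video.
width, height = 1920, 1080     # active picture size in pixels
frames_per_s  = 30             # 30 frames (60 interlaced fields) per second
bits_per_px   = 20             # 10-bit samples with 4:2:2 chroma subsampling

bits_per_s = width * height * frames_per_s * bits_per_px
print(f"{bits_per_s / 1e9:.2f} Gb/s")  # well beyond 1 GigE, fits in 10 GigE
```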

Collaboration & Community Vision
- Advanced network infrastructure for collaboration
- Unified RIT campus
- Connectivity to Rochester regional partners
- Global interactions: ad hoc, planned, persistent