Presentation transcript:

Accelerating Throughput and Quality for Multiscale Cell Modeling
- Integrate access to advanced computational resources and high-level services (resource scheduling, automated data management) to accelerate and improve each workflow process
- Target "extreme" cells that have multiple regions of structural and functional specialization spanning multiple scales (e.g., striatal medium spiny neurons, cerebellar Purkinje neurons)

Telescience Portal and Applications: Telescience Builds Upon the Grid
Launching applications on the Telescience Grid requires the coordination of many different services across many logical layers:
- Higher-level services: GAMA, GridWrap, etc.
- Collective services: MyProxy, Pegasus, DataCutter, GridFTP, etc.
- Local services: GSI, Globus, Condor, RLS, SRB, NWS, etc.
- Physical resources: data storage, compute resources, etc.
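To make the layering concrete, the sketch below shows how a portal-side launcher might coordinate one service from each layer to run a job. This is a minimal illustration in Java: the interface names (CredentialService, DataService, JobService) and the scratch-directory convention are hypothetical stand-ins for the GSI/MyProxy, SRB/GridFTP, and Globus/Condor roles named on the slide, not actual Telescience or Globus APIs.

```java
// Hypothetical sketch of coordinating one service from each logical layer.
// The interfaces are illustrative stand-ins, not real middleware APIs.
public class GridJobLauncher {

    /** Security layer (GSI, MyProxy): obtains a delegated proxy credential. */
    interface CredentialService {
        String acquireProxy(String user, char[] passphrase);
    }

    /** Data layer (SRB, GridFTP): stages input files and retrieves results. */
    interface DataService {
        void stageIn(String proxy, String source, String scratchDir);
        void stageOut(String proxy, String scratchDir, String archive);
    }

    /** Execution layer (Globus, Condor): submits and monitors the job. */
    interface JobService {
        String submit(String proxy, String executable, String scratchDir);
        boolean isFinished(String jobId);
    }

    private final CredentialService credentials;
    private final DataService data;
    private final JobService jobs;

    public GridJobLauncher(CredentialService c, DataService d, JobService j) {
        this.credentials = c;
        this.data = d;
        this.jobs = j;
    }

    /** Runs one job end to end: authenticate, stage data, submit, wait, archive. */
    public void run(String user, char[] pass, String exe,
                    String input, String archive) throws InterruptedException {
        String proxy = credentials.acquireProxy(user, pass);   // security layer
        String scratch = "/scratch/" + user;                   // hypothetical work area
        data.stageIn(proxy, input, scratch);                   // data layer
        String jobId = jobs.submit(proxy, exe, scratch);       // execution layer
        while (!jobs.isFinished(jobId)) {                      // poll for completion
            Thread.sleep(5_000);
        }
        data.stageOut(proxy, scratch, archive);                // archive the results
    }
}
```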

Telescience Portal and Applications
Telescience provides solutions for both users and developers:
- An intuitive GUI for end users to launch jobs and manage data
- A programming interface and other tools (higher-level services: GAMA, GridWrap, etc.) for developers of the Telescience Portal and other clients to access the Telescience Grid (compatible with other scientific Grids)

Application to Middleware Interaction Components (ATOMIC) (Lin AW, et al., IEEE CBMS 2005)
The higher-level services (GAMA, GridWrap, etc.) have been formally packaged into distributable, extensible software components named ATOMIC:
- Current ATOMIC tools: TeleAuth/GAMA, TeleWrap, TeleRun
- Twice-yearly releases (April, September)

Segmentation Tools
- UTAH SciRun
- JINX Fuzzy Segmentation

Telescience Portal
Single sign-on web interface for ubiquitous access to tools (applications, data, instruments).

Why Portlets?
GridSphere/portlet advantages:
- Similar look and feel to other "portals" (e.g., My Yahoo!)
- Robust development environment (JSR 168 compliance)
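To illustrate the JSR 168 model the slide refers to, here is a minimal portlet sketch in Java. The class name and rendered text are invented for this example, but GenericPortlet, RenderRequest, and RenderResponse are the standard javax.portlet types that JSR 168-compliant containers such as GridSphere host.

```java
import java.io.IOException;
import java.io.PrintWriter;
import javax.portlet.GenericPortlet;
import javax.portlet.PortletException;
import javax.portlet.RenderRequest;
import javax.portlet.RenderResponse;

/**
 * Minimal JSR 168 portlet sketch. The portlet container (e.g. GridSphere)
 * instantiates the class and calls doView() whenever the portlet window
 * is rendered inside the portal page.
 */
public class HelloTelesciencePortlet extends GenericPortlet {

    @Override
    protected void doView(RenderRequest request, RenderResponse response)
            throws PortletException, IOException {
        response.setContentType("text/html");   // the portlet emits an HTML fragment
        PrintWriter out = response.getWriter();
        out.println("<p>Hello, " + request.getRemoteUser() + "</p>");
        out.println("<p>Submit a job or browse your data from this portlet.</p>");
    }
}
```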

Simplifying a Complicated Process
Advantages of a portal-managed workflow:
- Applications are centralized in a common interface
- Automatic and transparent data management
- Appropriate tools have been merged into single applications

Robust Development Environment
Rapidly develop and deploy portlets for the LM workflow:
- Before: image to database took 6 people, 4 platforms, and 4 hours of computation, plus high overhead
- After: image to database takes 1 person, 1 portal, and 5 minutes of computation, with low overhead
- LM workflow by Josh Tran; features Jibber/Jetsam by Stephan Lamont

Effective Use of Computational Resources
Increasing computational requirements have led to more effective use of distributed resources:
- Creation of the Telescience Certificate Authority (CA) for better management of user sign-on; the first external CA recognized by national infrastructure projects (TeraGrid, OptIPuter)
- Condor 6.7.x (current beta) implementation: more efficient, permits "real-time" computation, provides cluster pool management, and connects to larger-scale infrastructure projects (e.g., TeraGrid, OptIPuter)
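For readers unfamiliar with Condor, the sketch below shows one way a portal back end might hand a job to a Condor pool: write a standard submit description file and invoke the condor_submit command-line tool. The paths and the example executable are hypothetical, but the submit-file keywords (universe, executable, queue, etc.) and the condor_submit invocation are standard Condor usage.

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;

/**
 * Sketch of submitting a job to a Condor pool from Java: write a standard
 * submit description file, then shell out to condor_submit. The binary name
 * and file paths are hypothetical.
 */
public class CondorSubmitter {

    public static void main(String[] args) throws IOException, InterruptedException {
        String submitDescription = String.join("\n",
                "universe   = vanilla",
                "executable = /usr/local/bin/gtomo",     // hypothetical reconstruction binary
                "arguments  = --input tilt_series.dat",
                "output     = gtomo.out",
                "error      = gtomo.err",
                "log        = gtomo.log",
                "queue");

        Path submitFile = Path.of("gtomo.sub");
        Files.writeString(submitFile, submitDescription);

        // condor_submit parses the description file and places the job in the pool's queue.
        Process p = new ProcessBuilder("condor_submit", submitFile.toString())
                .inheritIO()
                .start();
        System.exit(p.waitFor());
    }
}
```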

Accelerating and Generalizing Computation
Gtomo (simple back projection) was used to prototype a new architecture that separates algorithms from parallelism or "Grid" code:
- Uses the Chimera/Pegasus framework
- Several successful launches; beta testing at NCMIR/ISI; stress tested with a 4k x 4k x 4k dataset
- Several high-performance codes/applications: simple backprojection, TxBR, quadratic reconstruction, deconvolution
- Integrated with the Telescience Portal: a simple "click-and-compute" interface, transparent authentication management, automatic progress tracking, and transparent metadata collection
Telescience tools allow developers to focus on the science of the algorithms, not on the Grid.
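The separation of algorithm code from Grid plumbing that the Gtomo prototype demonstrates can be pictured with a small interface split, sketched below. The interface and class names are hypothetical and are not taken from the Telescience code base; the point is only that algorithm authors implement the science-side interface while the execution side decides how each piece of work is mapped onto Grid jobs (e.g., by a Chimera/Pegasus-style planner).

```java
/**
 * Hypothetical sketch of separating reconstruction algorithms from Grid
 * execution code: algorithm authors implement Reconstruction only, while
 * SlabExecutor hides where and how each slab actually runs.
 */
public final class ReconstructionRunner {

    /** Pure science code: operates on in-memory data, knows nothing about the Grid. */
    interface Reconstruction {
        float[] reconstructSlab(float[] projections, int slabIndex);
    }

    /** Grid plumbing: decides where and how each slab actually runs. */
    interface SlabExecutor {
        float[] run(Reconstruction algorithm, float[] projections, int slabIndex);
    }

    /** Splits the volume into independent slabs and delegates each to the executor. */
    public static float[][] reconstructVolume(Reconstruction algorithm,
                                              SlabExecutor executor,
                                              float[] projections,
                                              int slabCount) {
        float[][] volume = new float[slabCount][];
        for (int slab = 0; slab < slabCount; slab++) {
            volume[slab] = executor.run(algorithm, projections, slab);
        }
        return volume;
    }
}
```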

Scalable Displays Allow Both Global Content and Fine Detail
- 30-megapixel SunScreen display driven by a 20-node Sun Opteron visualization cluster
- Source: Mark Ellisman, David Lee, Jason Leigh

Two New Calit2 Buildings Will Provide a Persistent Collaboration "Living Laboratory"
- Over 1,000 researchers in two buildings (UC San Diego and UC Irvine) will create new laboratory facilities: nano, MEMS, RF, optical, visualization, bioengineering
- International conferences and testbeds
- 150 optical fibers into the UCSD building
Preparing for a world in which distance has been eliminated…

Toward an Interactive Gigapixel Display
The Scalable Adaptive Graphics Environment (SAGE) controls the NSF LambdaVision display:
- 100-megapixel display (55 panels)
- 1/4 teraflop, driven by a 30-node cluster of 64-bit dual Opterons
- 1/3 terabit/sec I/O: 30 x 10GE interfaces, linked to the OptIPuter
- 1/8 TB RAM, 60 TB disk
Calit2 is building a LambdaVision wall in each of the UCI and UCSD buildings.
Source: Jason Leigh, Tom DeFanti, OptIPuter Co-PIs