Presentation transcript:

Cyberaide Shell
Andrew J. Younge, Xi He, Fugang Wang, and Gregor von Laszewski
Rochester Institute of Technology, 102 Lomb Memorial Drive, Rochester, NY

Introduction

Through the development of advanced middleware, Grid computing has evolved into a mature technology that scientists and researchers can leverage to gain knowledge previously unobtainable in a wide variety of scientific domains. However, becoming involved with Grid computing has grown increasingly challenging for new scientists due to its complex nature. This high entry barrier in turn limits the use of Grid computing and advanced cyberinfrastructure, as it cannot trickle down to a more general scientific user group. For example, the absence of dedicated developer teams in smaller research projects forces scientists to become Grid computing experts themselves. A typical Grid environment presents obstacles such as task management, job scheduling, resource monitoring, and organization management, which can overwhelm a domain scientist. Many of these utilities could be abstracted to simplify the scientific user's experience and enable them to perform their research in a more efficient and productive environment. Our solution is Cyberaide Shell, an advanced but simple-to-use system shell that provides access to the powerful cyberinfrastructure available today [1]. We abstract resource, task, and application management tools through a scriptable command line interface. Through a service integration mechanism, the shell's functionality is exposed to a wide variety of frameworks and programming languages, allowing scientists to leverage the power of Grid computing within their own applications.

High-level Components

The Cyberaide Shell contains four high-level design components that make it unique when compared with other current technologies.
The four components are cyberinfrastructure deployments, command line interpreters, object management systems, and services. The design of Cyberaide Shell contains components that enable access to a variety of new cyberinfrastructure, Grid, and Cloud toolkits and services, and we have designed an abstraction framework that makes it possible to integrate these and other backends for future cyberinfrastructure needs. Users and high-level applications interface with Cyberaide Shell through its standardized command line interface (CLI), which gives users an easy way to manage jobs, resources, and users. As part of the shell's internal design, we introduce a new concept that we term Literate Semantic Objects (LSO). These objects are semantic in that an identifier can easily specify them, but the way an operation is applied to them depends on their attributes at runtime. In Cyberaide Shell, all advanced functionality is exposed as a service, enabling other computing frameworks to access the shell through an independent entry point, via another tool or even another programming language.

Architectural Model

Based on the high-level design goals, we want to interact with Cyberaide Shell in a variety of ways. To support this, we follow a layered, service- and component-based architectural design in which the different services interact with each other through predefined channels. The architectural layers consist of a resource layer, a shell service layer, a language layer, and a user layer.

References

[1] I. Foster, "The anatomy of the Grid: Enabling scalable virtual organizations," International Journal of High Performance Computing Applications, vol. 15, no. 3, pp. 200–222, August 2001.
[2] I. Foster and C. Kesselman, "Globus: A Metacomputing Infrastructure Toolkit," International Journal of Supercomputer Applications, vol. 11, no. 2, pp. 115–128, 1997, ftp://ftp.globus.org/pub/globus/papers/globus.pdf.
[3] F. Berman, "From TeraGrid to knowledge grid," Communications of the ACM, vol. 44, no. 11, pp. 27–28, 2001.

Implementation

The current project state includes a functioning Cyberaide Shell prototype with the main components in place. The command line interface is implemented using the Apache CLI module, making the shell environment easy to extend. Task management and job execution are currently supported through SSH and Globus Toolkit [2] resources, with integration into the TeraGrid [3]. The shell is exposed through a secure Web service platform using Apache CXF, with clients implemented in Java, C#, Python, Ruby, and Matlab that scientists can embed within their own applications. Additional features, such as further language bindings, workflow commands, and cyberinfrastructure frameworks, are currently under development.

Example

The current prototype of Cyberaide Shell supports two resource types for remote job execution, performed by the submit and execution commands. The first resource type uses the Globus Toolkit [2] to access the TeraGrid [3]. Login is managed through MyProxy, an X.509 credential management service that works with the Grid Security Infrastructure (GSI). Job submission and monitoring are performed by the globusrun-ws command, which uses the WS-GRAM protocol to submit jobs to a variety of batch queuing systems, such as Condor or PBS. The other job execution resource is SSH: the SSH extension logs into a remote computer and spawns the job as a new process. Monitoring the status and retrieving the results are done through separate logins as requested by the system.

Conclusion

Cyberaide Shell was created to help overcome the challenges scientists face when using advanced cyberinfrastructure in today's complex computing environment. Cyberaide Shell overcomes the challenges outlined in this paper through a variety of novel concepts.
This includes the definition and use of Literate Semantic Objects for dynamic data management, complex interaction with TeraGrid resources to leverage advanced cyberinfrastructure, and a new CLI that is familiar to most users yet easily extensible by advanced developers. A working prototype has been created that demonstrates the potential of Cyberaide Shell and proves the plausibility of our design. Advanced but simple-to-use tools are needed to further enhance the usability of Grid computing for application users, and they may lower the entry barrier to Grid computing like never before. If this enhanced usability is achieved, it could greatly expand the use of Grid computing in the scientific community by making it practical for a larger user group.
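The two remote-execution paths in the Example section can be sketched as follows. This is a minimal illustration of how a shell might construct the launch command for each resource type, not the actual Cyberaide Shell implementation; the function name and its structure are invented for this sketch, though globusrun-ws (WS-GRAM) and ssh are the real underlying commands named in the poster.

```python
# Hypothetical sketch of the two job-execution paths described in the
# Example section. Function and parameter names are illustrative.

def build_submit_command(resource, command, host=None, user=None):
    """Return the argv list used to launch `command` on a resource.

    resource: "globus" goes through globusrun-ws, which uses WS-GRAM to
    submit jobs to batch queuing systems such as Condor or PBS;
    "ssh" logs into a remote machine and spawns the job as a new process.
    """
    if resource == "globus":
        # -c passes a simple executable to globusrun-ws for submission.
        return ["globusrun-ws", "-submit", "-c", command]
    if resource == "ssh":
        # The SSH path spawns the job remotely; status and results are
        # fetched later by separate login attempts.
        return ["ssh", f"{user}@{host}", command]
    raise ValueError(f"unknown resource type: {resource}")

print(build_submit_command("globus", "/bin/hostname"))
print(build_submit_command("ssh", "/bin/hostname",
                           host="gw.example.org", user="alice"))
```

The returned argv lists would then be handed to a process launcher (e.g. Python's subprocess module); keeping command construction separate from execution mirrors the poster's split between resource abstraction and the underlying toolkits.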