
Personal Cloud Controller (PCC)
Yuan Luo¹, Shava Smallen², Beth Plale¹, Philip Papadopoulos²
¹ School of Informatics and Computing, Indiana University Bloomington
² San Diego Supercomputer Center, University of California San Diego

Overview

Goals:
– Enable a lab or group to easily manage application virtual clusters on available resources
– Leverage PRAGMA Cloud tools: pragma_bootstrap, IPOP, ViNE
– Stay lightweight by extending HTCondor from UW-Madison
– Provide command-line and Web interfaces

Working Group: Resources

The PRAGMA Cloud

[Diagram: a PCC-enabled PRAGMA Cloud of Clusters A–D, distinguishing allocated from unclaimed resources, with the physical and virtual networks, each cluster's master node, the PCC-HTCondor master, and the provenance collecting path.]

PCC-HTCondor Architecture

[Diagram: the Central Manager runs the Negotiator and Collector; Machine 1 (submit) runs a Schedd and Shadow; Machines 1…N (execute) run a Startd and Starter, where the VM GAHP invokes the PRAGMA Cloud tools (pragma_boot). Each machine also runs a Startd and Schedd; arrows mark communication paths and process invocations.]
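To make the VM GAHP step concrete, here is a rough sketch of the kind of command it assembles on the execute machine; the flag names simply mirror the .vmconf keys shown on the next slide and are assumptions, not pragma_boot's documented interface:

    # Hypothetical sketch: the invocation the VM GAHP might build from the
    # job's .vmconf entries (basepath, key, num_cores, vcname, logfile).
    # Actual pragma_boot option names may differ.
    pragma_boot \
        --basepath /opt/pragma_boot/vm-images \
        --key ~/.ssh/id_rsa.pub \
        --num_cores 2 \
        --vcname lifemapper \
        --logfile pragma_boot.log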

PCC-HTCondor Job Submission

Sample PCC-HTCondor submission script:

    universe = vm
    executable = lifemapper
    log = simple.condor.log
    vm_type = rocks
    rocks_job_dir = /path/to/the/job/dir
    queue

.vmconf file in the rocks job directory:

    executable = pragma_boot
    basepath = /opt/pragma_boot/vm-images
    key = ~/.ssh/id_rsa.pub
    num_cores = 2
    vcname = lifemapper
    logfile = pragma_boot.log
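Assuming the submit description above were saved as lifemapper.submit (a filename chosen here for illustration), the job would be handed to HTCondor with the standard commands:

    condor_submit lifemapper.submit   # queue the vm-universe job with the local Schedd
    condor_q                          # monitor the job's progress in the queue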

Status and Future Plans

Initial prototype implemented:
– Start and monitor virtual clusters using pragma_bootstrap via HTCondor (VM GAHP)
– Web interface prototype (PHP)

Near-term goals:
– Add increased controllability and robustness (April – June)
– Multi-site clusters (July – Sept)

Longer-term goals:
– Data-aware scheduling
– Fault tolerance
– Provenance

[Diagram: the PCC software stack — a Web Interface on top of the Personal Cloud Controller, which drives PCC-HTCondor and the PRAGMA tools (pragma_boot, ViNE, IPOP) over back ends such as Rocks and OpenNebula…]

PCC Demo Overview and Setup

1. View PCC Web interface
   a. Fully launched “lifemapper” 8-core virtual cluster
   b. Just launched “dock6” 4-core virtual cluster
2. View Condor pieces (see the commands sketched below)
   a. Submit scripts
   b. condor_status
   c. condor_q

Testbed: nbcr-224.ucsd.edu — 4 x Dell PowerEdge SC (Dual-Core 2.4 GHz AMD Opteron, 8 GB memory, 250 GB disk) running Rocks 6.1 with the KVM roll, Condor, pragma_bootstrap, and PCC + web frontend; 3 public IPs; VM hosts vm-container-0-0, vm-container-0-1, and vm-container-0-2.
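The Condor checks in step 2 can be narrowed to the virtual-cluster jobs; a minimal sketch, assuming the pool runs a Condor version supporting these standard options (JobUniverse 13 is HTCondor's code for the vm universe):

    condor_status -vm                         # summarize VM-capable execute slots
    condor_q -constraint 'JobUniverse == 13'  # list only vm-universe jobs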

Show web interface and ability to view running virtual clusters

Show launch interface

Show launching virtual cluster

Show running virtual cluster

Show condor status

Show submit files