Weekly Work. Dates: 2010 8/20~8/25. Subject: Condor. Presenter: C.Y. Hsieh.


Introduction
Condor began in 1988 at the University of Wisconsin-Madison as a High-Throughput Computing (HTC) system.
High-Performance Computing (HPC), e.g. MPI: tightly coupled parallel jobs that need many processors at once.
High-Throughput Computing (HTC), e.g. Condor: many independent jobs completed over long periods, using whatever CPU cycles are available.
How does it work?

[Diagram] Users upload jobs (Job1, Job2, Job3, Job4, ...) to Condor, which distributes them across the CPUs of the pool (CPU1, CPU2, CPU3, CPU4, ...).

The Architecture of Condor
Central manager (only one per pool), running: A. the collector, B. the negotiator.
Submit machines (any number, one or more): submit jobs.
Execute machines (any number, one or more): execute jobs.
[Diagram: Steps 1-4 of the matchmaking flow between these machines.]

Setting
Master: central manager + submit machine + execute machine.
Node1: submit machine + execute machine.
Node2: submit machine + execute machine.
[Diagram: the three machines, with CPU1 and CPU2 on the nodes.]
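The role assignment above is normally expressed through each machine's DAEMON_LIST. A minimal sketch of the relevant condor_config.local entries (the hostname is an illustrative assumption, not from the source):

```
# On Master (central manager + submit + execute machine):
DAEMON_LIST = MASTER, COLLECTOR, NEGOTIATOR, SCHEDD, STARTD

# On Node1 and Node2 (submit + execute machine only):
DAEMON_LIST = MASTER, SCHEDD, STARTD

# All machines point at the central manager (example hostname):
CONDOR_HOST = master.example.edu
```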

Install Condor
# Download the repository file from the website into /etc/yum.repos.d
cd /etc/yum.repos.d
wget
# Install Condor
yum install condor
# Edit Condor's configuration files
vi /etc/condor/condor_config         # Global configuration
vi /etc/condor/condor_config.local   # Local configuration
# Start the Condor daemons
service condor start
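Once Condor is installed, a job is described in a submit description file. A minimal sketch, assuming a vanilla-universe job that just echoes a message (the file names hello.sub, hello.out, etc. are illustrative, not from the source); the condor_submit call itself is left as a comment because it needs a running pool:

```shell
# Write a minimal submit description file (file names are illustrative).
cat > hello.sub <<'EOF'
universe   = vanilla
executable = /bin/echo
arguments  = "hello condor"
output     = hello.out
error      = hello.err
log        = hello.log
queue
EOF

# On a submit machine with the daemons running, the job would be queued with:
#   condor_submit hello.sub
#   condor_q        # watch it enter the queue and run
```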

# Check the number of CPUs in the pool
condor_status

Name  OpSys  Arch   State      Activity  LoadAv  Mem  ActvtyTime
      LINUX  INTEL  Unclaimed  Idle                   :00:04
      LINUX  INTEL  Unclaimed  Idle                   :00:05
      LINUX  INTEL  Unclaimed  Idle                   :00:04
      LINUX  INTEL  Unclaimed  Idle                   :00:05

             Total  Owner  Claimed  Unclaimed  Matched  Preempting  Backfill
INTEL/LINUX
      Total
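Beyond the plain listing, condor_status can also filter and summarize slots. For reference (these commands need a running pool, so no output is shown):

```
# Show only idle, unclaimed machines:
condor_status -constraint 'State == "Unclaimed" && Activity == "Idle"'

# Show only the summary totals:
condor_status -total
```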

# Check whether Condor is running
ps -ef | grep condor

# On the master:
condor  :12 ?      00:00:01 /usr/sbin/condor_master -pidfile /var/run/condor/condor.pid
condor  :12 ?      00:00:00 condor_collector -f
condor  :12 ?      00:00:01 condor_negotiator -f
condor  :12 ?      00:00:00 condor_schedd -f
condor  :12 ?      00:00:04 condor_startd -f
root    :12 ?      00:00:00 condor_procd -A /var/run/condor/procd_pipe.SCHEDD -R S 60 -C 103
root    :32 pts/3  00:00:00 grep condor

# On a node:
condor  :16 ?      00:00:00 /usr/sbin/condor_master -pidfile /var/run/condor/condor.pid
condor  :16 ?      00:00:00 condor_schedd -f
condor  :16 ?      00:00:04 condor_startd -f
root    :16 ?      00:00:00 condor_procd -A /var/run/condor/procd_pipe.SCHEDD -R S 60 -C 103
root    :36 pts/2  00:00:00 grep condor