GridSAT Portal: A Grid Portal for Solving Satisfiability Problems Wahid Chrabakh and Rich Wolski University of California, Santa Barbara.


Challenging Scientific Problems
- Computationally demanding
  - Large compute power
  - Extended periods of time
- Infrastructure:
  - Desktops, clusters, supercomputers
- Common resource usage:
  - Most suitable for co-located nodes
  - Determine the number of nodes to use
  - Use all nodes until the termination criteria are reached

Satisfiability
- Example of dynamic resource use
- Application characteristics:
  - Branch-and-bound search
  - Unpredictable runtime behavior
  - Memory intensive: the internal clause database grows until it overwhelms RAM
  - CPU intensive: 100% CPU load

Satisfiability Problem (SAT)
- Set of variables V = {v_i | i = 1, ..., k}
- Literal: a variable or its complement
- Problems in CNF form: the community standard
- Clause: OR of a set of literals
- Conjunctive Normal Form: F = C_1 AND C_2 AND C_3 AND ... AND C_k
- Standard file format (example below):
  - c comments
  - p cnf num_vars num_clauses
  - one clause per line, e.g. +v1 -v2 ... +v213 0 (terminated by 0)
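For concreteness, here is a minimal sketch (not from the original slides) of how a formula in this file format can be parsed and checked by brute force. The tiny formula is invented for illustration; real SAT instances are far too large for an exhaustive check like this.

```python
from itertools import product

# A toy CNF in DIMACS form: (x1 OR NOT x2) AND (x2 OR x3). Not a GridSAT benchmark.
dimacs = """c toy example for illustration only
p cnf 3 2
1 -2 0
2 3 0
"""

def parse_dimacs(text):
    """Return (num_vars, clauses); each clause is a list of signed ints."""
    clauses, num_vars = [], 0
    for line in text.splitlines():
        if not line or line[0] in "cp":
            if line.startswith("p cnf"):
                num_vars = int(line.split()[2])
            continue
        lits = [int(tok) for tok in line.split()]
        clauses.append(lits[:-1])          # drop the trailing 0 terminator
    return num_vars, clauses

def satisfiable(num_vars, clauses):
    """Brute-force check: try every assignment (exponential, illustration only)."""
    for bits in product([False, True], repeat=num_vars):
        if all(any(bits[abs(l) - 1] == (l > 0) for l in clause) for clause in clauses):
            return True, bits
    return False, None

print(satisfiable(*parse_dimacs(dimacs)))  # (True, (False, False, True)) for this toy formula
```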

Satisfiability Applications
- Circuit design
- FPGA routing
- Model checking: AI, software
- Security
- Scheduling
- Theoretical: physics, chemistry, combinatorics
- Many more...

SAT Community
- SAT Live!: news, forums, links, documents
- SAT-Ex: experimentation and execution system
- SATLIB: dynamic set of benchmarks; freely available solvers

Who uses SAT Live!
- Period: Sep - Jan 2003: 21,766 distinct hosts
- Jan 2003: 524 distinct hosts
- SATLIB: 250 hits/month

SAT Competition
- 55 sequential solvers: circus, circush0, cls, compsat, eqube2, forklift, funex, gasat, isat1, tts-2-0, unitwalk, walksatauto, walksatmp, walksatskc, werewolf, wllsatv1, zchaff, zchaff_rand
- Execution uses SAT-Ex
- Two rounds:
  - First round: easy problems
  - Second round: harder problems
- Awards to category leaders for SAT, UNSAT, and overall
- Challenging set: some problems left unsolved

Benchmarks
- Community-submitted benchmarks
- Crafted benchmark (38 MB):
  - Especially made to give the solver a hard time
- Random benchmark (11 MB)
- Industrial benchmark (2 GB):
  - Real industrial instances from all domains

GridSAT: The Solver
- Parallel, distributed SAT solver
- Based on zChaff, the leading sequential solver
- GridSAT beats zChaff on problems that zChaff can solve
- GridSAT solves problems that were not previously solved
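To illustrate the general idea behind distributing a DPLL-style search (a sketch of the technique in general, not GridSAT's actual code): a running subproblem can be split by fixing one unassigned variable, so the two branches proceed independently on different Grid resources. The function name and the "pick the first variable" splitting heuristic are assumptions made for this example.

```python
import copy

def split_search_space(assignment, unassigned_vars):
    """Split the current subproblem on one unassigned variable.

    `assignment` maps variable -> bool for the decisions made so far.
    The two returned assignments explore v=True and v=False and can run
    independently on different resources (illustrative only).
    """
    v = unassigned_vars[0]                       # hypothetical heuristic: pick the first
    left, right = copy.copy(assignment), copy.copy(assignment)
    left[v], right[v] = True, False
    return left, right

# Example: a subproblem with x1 already fixed is split on x2.
left, right = split_search_space({1: True}, [2, 3])
print(left)   # {1: True, 2: True}
print(right)  # {1: True, 2: False}
```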

GridSAT: Grid Aware
- Highly portable components
- Uses resources simultaneously:
  - Single nodes, clusters, supercomputers
  - Resources may leave and join at any time
- Fault-tolerant:
  - Error detection and checkpointing
  - All resources can and do fail
  - Even reliable resources have maintenance and upgrade periods
- Reactive to resource composition and load: migration
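A minimal sketch of the checkpointing idea on this slide, assuming a client's solver state can be serialized to a file so that a restarted or migrated client can resume; the state fields and file format are assumptions, not GridSAT's actual checkpoint layout.

```python
import json, os, tempfile

def checkpoint(state, path):
    """Atomically write solver state so a restarted or migrated client can
    resume; write-then-rename avoids leaving a torn file after a crash."""
    fd, tmp = tempfile.mkstemp(dir=os.path.dirname(path) or ".")
    with os.fdopen(fd, "w") as f:
        json.dump(state, f)
    os.replace(tmp, path)

def restore(path):
    with open(path) as f:
        return json.load(f)

# Hypothetical client state: current partial assignment plus learned clauses.
state = {"assignment": {"1": True, "2": False}, "learned_clauses": [[-1, 3]]}
checkpoint(state, "client_7.ckpt")
print(restore("client_7.ckpt"))
```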

How to make GridSAT available to users?
- Having interested users deploy GridSAT locally:
  - Complex
  - Not enough computational resources
- Feedback from SAT experts:
  - Make it available through a portal
  - Simple interface: minimal user input
- GridSAT Portal: orca.cs.ucsb.edu/sat_portal
- Test problems: orca.cs.ucsb.edu/sat_portal/test_problems.htm

Internal Design
[Architecture diagram: user, web server, GridSAT coordinator, and compute resources (DataStar, TeraGrid, desktop machines).]

User accounts:

Problem Submission

List Problems

Detailed Report

Budget-based Scheduling
- The requested CPU count or timeout may not be fulfilled exactly:
  - CPU count: too large
  - Time limit: too large or too small
- Find the closest job to the user's request
- May need multiple jobs
- Use Max CPUs * Timeout as a budget (see the sketch below):
  - Debit the budget for every job
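A minimal sketch of the budget idea on this slide, under assumptions about how jobs are sized (the sizing policy and function names are hypothetical): the user's request is converted into a CPU-hour budget, and each batch job submitted on the user's behalf is debited against it until the budget is exhausted or the problem is solved.

```python
def run_with_budget(max_cpus, timeout_hours, submit_job):
    """Spend a CPU-hour budget across possibly many batch jobs.

    `submit_job(cpus, hours)` is a hypothetical callback that submits one
    batch job and returns the CPU-hours it actually consumed, or None if
    the problem was solved and no more jobs are needed.
    """
    budget = max_cpus * timeout_hours           # total CPU-hours the user asked for
    while budget > 0:
        # Size the next job to whatever the budget still allows (illustrative policy).
        cpus = min(max_cpus, 64)                # e.g. cap at what a queue will accept
        hours = min(timeout_hours, budget / cpus)
        used = submit_job(cpus, hours)
        if not used:                            # solved, or nothing consumed: stop
            break
        budget -= used                          # debit the budget for this job
    return budget

# Toy usage: every job "consumes" its full allocation and never solves the problem.
leftover = run_with_budget(128, 10, lambda c, h: c * h)
print(leftover)  # 0 once the whole budget has been spent
```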

Conclusion
- New science and engineering portal
- GridSAT: a Grid-enabled application that manages resources
- Web portal:
  - Launches coordinators
  - Provides feedback and accounting
- Challenge:
  - Provide a compelling service to get the community interested

Thanks
- LRAC allocation through NSF
- TeraGrid: SDSC, NCSA, PSC, TACC
- DataStar at SDSC; also Blue Horizon
- Mayhem Lab at UCSB

User Environment
- Input:
  - Problem in standard CNF format
  - Maximum number of CPUs to use
  - Timeout period
- Feedback:
  - Jobs: resource, status, submit, start, and end times
  - Total number of active processors
  - CPU-hours consumed
  - Number of checkpoints
  - Final result: UNSAT, or SAT with a satisfying instance
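As a concrete, hypothetical rendering of this input/feedback contract (the field names and types below are assumptions, not the portal's actual schema):

```python
from dataclasses import dataclass, field
from typing import Optional, List

@dataclass
class SubmissionRequest:
    """What the user supplies to the portal (illustrative field names)."""
    cnf_file: str            # problem in standard DIMACS CNF format
    max_cpus: int            # maximum number of CPUs to use
    timeout_hours: float     # timeout period

@dataclass
class JobFeedback:
    """Per-job status reported back to the user (illustrative field names)."""
    resource: str            # e.g. "DataStar" or a TeraGrid site
    status: str              # queued / running / finished
    submit_time: str
    start_time: Optional[str] = None
    end_time: Optional[str] = None

@dataclass
class ProblemReport:
    jobs: List[JobFeedback] = field(default_factory=list)
    active_processors: int = 0
    cpu_hours_consumed: float = 0.0
    checkpoints: int = 0
    result: Optional[str] = None   # "UNSAT", or "SAT" plus a satisfying assignment
```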

Programming Models
- Synchronous model:
  - Predictable space: number of nodes, memory
  - Predictable time: how long per instance
  - Synchronization barriers
  - Fits the cluster model (MPI)
- Asynchronous model:
  - Dynamic resource requirements
  - Variable and unpredictable duration
  - Asynchronous events
  - Fits the computational Grid environment
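A tiny sketch of the contrast, under the assumption that "asynchronous events" means workers report results whenever they finish rather than at a global barrier; the queue-based pattern below is illustrative, not GridSAT's implementation.

```python
import queue, random, threading, time

events = queue.Queue()     # results arrive whenever a worker finishes; no global barrier

def worker(name):
    time.sleep(random.random())       # unpredictable duration per subproblem
    events.put((name, "done"))        # asynchronous completion event

threads = [threading.Thread(target=worker, args=(f"node{i}",)) for i in range(4)]
for t in threads:
    t.start()

for _ in range(4):                    # the coordinator reacts to events as they arrive
    print(events.get())
for t in threads:
    t.join()
```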
