Computational Research in the Battelle Center for Mathematical Medicine


Statistical Genetics Research
- Develop statistical methodologies
- Localize and characterize human disease genes
  – Autism, CLP, AITD, Schizophrenia
- Our approach involves millions of likelihood calculations at each genomic position
- Each multi-parameter likelihood can be slow to evaluate
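
To make the scale concrete, here is a minimal, hypothetical sketch of such a scan in Python: the per-family likelihood is a placeholder, and the parameter names (theta for the recombination fraction, alpha for heterogeneity) are assumptions, but the nesting of positions x parameter grid x families is what drives the millions of evaluations.

    import itertools
    import numpy as np

    # Hypothetical parameter grids; a real analysis would use model-specific ranges.
    thetas = np.linspace(0.0, 0.5, 50)    # recombination fraction
    alphas = np.linspace(0.0, 1.0, 50)    # heterogeneity parameter

    def family_log_likelihood(family, position, theta, alpha):
        # Placeholder: the real computation traverses the pedigree and is expensive.
        return -((theta - 0.1) ** 2 + (alpha - 0.6) ** 2)

    def scan(positions, families):
        best = {}
        for pos in positions:
            # Every (theta, alpha) pair, summed over all families, at every position.
            best[pos] = max(
                sum(family_log_likelihood(f, pos, t, a) for f in families)
                for t, a in itertools.product(thetas, alphas)
            )
        return best

    print(scan(range(3), ["fam001", "fam002"]))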

Polynomial Approach
- Represent likelihoods as algebraic polynomials
  – Built once per family per position
  – Evaluated millions of times
- Challenges
  – High memory demand
  – Time consuming
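
A univariate sketch of the idea, using NumPy's polynomial routines (the real likelihood polynomials are multivariate and built from the pedigree; the coefficients below are placeholders): the expensive step of building the polynomial happens once per family per position, after which evaluation over a large parameter grid is cheap.

    import numpy as np
    from numpy.polynomial import polynomial as P

    def build_likelihood_polynomial(family, position):
        # Placeholder coefficients; in practice they come from summing over
        # genotype configurations in the pedigree (the slow, memory-hungry step).
        rng = np.random.default_rng(abs(hash((family, position))) % 2**32)
        return rng.random(8)            # degree-7 polynomial in one parameter

    coeffs = build_likelihood_polynomial("fam001", 1234)   # built once

    theta_grid = np.linspace(0.0, 0.5, 1_000_000)
    values = P.polyval(theta_grid, coeffs)                  # evaluated millions of times
    print(values.shape)                                     # (1000000,)

The trade-off named on the slide shows up directly here: storing the coefficient structures is the memory cost, and constructing them is the time-consuming part.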

Computing Infrastructure
- 65-node cluster with 4 TB storage
  – Head node: AMD Opteron, 2 x dual-core, 2.4 GHz, 16 GB RAM
  – 16 compute nodes: AMD Opteron, 2 x dual-core, 2.4 GHz, 8 GB RAM
  – 16 compute nodes: AMD Opteron, 2 x dual-core, 2.4 GHz, 16 GB RAM
  – 32 compute nodes: AMD Opteron, 2 x dual-core, 2.8 GHz, 16 GB RAM
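
Reading "2 x dual-core" as four cores per node, a quick back-of-the-envelope tally of the cluster above (node counts and memory sizes are from the slide; the cores-per-node figure is an assumption):

    # (node count, cores per node, RAM in GB, role)
    nodes = [
        (1, 4, 16, "head"),
        (16, 4, 8, "compute"),
        (16, 4, 16, "compute"),
        (32, 4, 16, "compute"),
    ]

    total_nodes = sum(n for n, _, _, _ in nodes)
    compute_cores = sum(n * c for n, c, _, role in nodes if role == "compute")
    compute_ram_gb = sum(n * r for n, _, r, role in nodes if role == "compute")
    print(total_nodes, compute_cores, compute_ram_gb)   # 65 nodes, 256 cores, 896 GB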

Database Servers
- Master server
  – 2 x 3.0 GHz Intel Xeon quad-core
  – 16 GB RAM
  – 300 GB RAID 1 for OS
  – 900 GB RAID 5 for data
- Slave server
  – 2 x 2.0 GHz Intel Xeon dual-core
  – 16 GB RAM
  – 300 GB RAID 1 for OS
  – 900 GB RAID 5 for data

Data Management
- MySQL master/slave setup
- [Diagram: client input passes through the webserver to the master database; the master replicates to the slave; output is returned through the webserver]
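
A sketch of how the application side of such a setup is typically used, assuming the MySQL Connector/Python client; the hostnames, credentials, and genotype table are hypothetical. Writes go to the master and replicate to the slave; read-only queries can be served from the slave.

    import mysql.connector  # assumes the mysql-connector-python package

    master = mysql.connector.connect(
        host="db-master", user="app", password="...", database="genetics")
    slave = mysql.connector.connect(
        host="db-slave", user="app", password="...", database="genetics")

    def store_genotypes(rows):
        # Writes always go to the master; replication pushes them to the slave.
        cur = master.cursor()
        cur.executemany(
            "INSERT INTO genotype (individual_id, snp_id, allele1, allele2) "
            "VALUES (%s, %s, %s, %s)", rows)
        master.commit()

    def fetch_genotypes(individual_id):
        # Read-only queries (e.g., from the webserver) can hit the slave.
        cur = slave.cursor()
        cur.execute(
            "SELECT snp_id, allele1, allele2 FROM genotype "
            "WHERE individual_id = %s", (individual_id,))
        return cur.fetchall()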

Data Management
- Manage/store genetic data
  – ~1 million SNPs per individual
- Data upload to OSC
  – ~1 TB a week
  – ~30 TB total
- Current live disk is full
- Move data to tape library
  – View content (file and folder names only)
  – Start retrieve jobs on our own
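
The growth rate above also makes it easy to estimate how long any given allocation lasts; a trivial sketch, assuming the ~1 TB/week rate holds and using a hypothetical capacity figure:

    WEEKLY_GROWTH_TB = 1.0     # ~1 TB uploaded per week (from the slide)
    CURRENT_HOLDINGS_TB = 30   # ~30 TB stored so far (from the slide)

    def weeks_of_headroom(capacity_tb):
        """Weeks until an allocation of capacity_tb fills, at the current rate."""
        return max(0.0, (capacity_tb - CURRENT_HOLDINGS_TB) / WEEKLY_GROWTH_TB)

    print(weeks_of_headroom(50))   # a hypothetical 50 TB allocation: ~20 weeks left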

Summary of Needs
- Access to nodes with sufficient memory
- Access to massive storage
- Great collaborations