1 Next Generation Genomics: Petascale data in the life sciences Guy Coates Wellcome Trust Sanger Institute

2 Outline DNA Sequencing and Informatics Managing Data Sharing Data Adventures in the Cloud

3 The Sanger Institute Funded by the Wellcome Trust, the 2nd largest research charity in the world. ~700 employees. Based on the Hinxton Genome Campus, Cambridge, UK. Large scale genomic research: we sequenced 1/3 of the human genome (the largest single contribution). We have active cancer, malaria, pathogen and genomic variation / human health studies. All data is made publicly available: websites, ftp, direct database access, programmatic APIs.

4 DNA sequencing

5 Next-generation Sequencing Life sciences is drowning in data from our new sequencing machines. Traditional sequencing: 96 sequencing reactions carried out per run. Next-generation sequencing: 52 million reactions per run. Machines are cheap(ish) and small: small labs can afford one, big labs can afford lots of them.

6 Economic Trends The cost of sequencing halves every 12 months (cf. Moore's Law). The Human Genome Project: 13 years, 23 labs, $500 million. A human genome today: 3 days, 1 machine, $10,000. Large centres are now doing studies with 1000s and 10,000s of genomes. Changes in sequencing technology are going to continue this trend: next-next generation sequencers are on their way, and a $500 genome is probable within 5 years.

7 Output Trends Our peak old-generation sequencing, August 2007: 3.5 Gbases/month. Current output, Jan 2010: 4 Tbases/month. That is a 1000x increase in our sequencing output. (In August 2007 the total size of GenBank was 200 Gbases.) Improvements in chemistry continue to increase the output of the machines.

8 The scary graph [Graph: sequencing output over time, with instrument upgrades marked; peak yearly capillary sequencing shown for comparison.]

9 Managing Growth We have exponential growth in storage and compute: storage/compute doubles every 12 months, ~7 PB raw. A gigabase of sequence becomes gigabytes of storage: 16 bytes per base for sequence data, and intermediate analysis typically needs 10x the disk space of the raw data. Moore's law will not save us. Transistor/disk density: T_d = 18 months. Sequencing cost: T_d = 12 months.
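A small illustrative calculation (not from the slides; the doubling times are the ones quoted above, the year horizons are arbitrary) makes the gap concrete:

```python
# Sequencing output/cost doubling time vs transistor/disk density doubling time.
SEQ_DOUBLING_MONTHS = 12     # from the slide: sequencing cost halves every 12 months
MOORE_DOUBLING_MONTHS = 18   # from the slide: transistor/disk density doubles every 18 months

for years in (1, 3, 6):      # horizons chosen for illustration only
    months = years * 12
    seq_growth = 2 ** (months / SEQ_DOUBLING_MONTHS)
    moore_growth = 2 ** (months / MOORE_DOUBLING_MONTHS)
    print(f"after {years} yr: sequencing x{seq_growth:.0f}, "
          f"disk/CPU x{moore_growth:.1f}, shortfall x{seq_growth / moore_growth:.1f}")
```

After six years demand has grown roughly four times faster than the hardware, which is why simply buying more disk is not a strategy on its own.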

10 Sequencing Informatics


12 Alignment Find the best match of fragments to a known genome / genomes. Essentially grep for DNA sequences, but using more sophisticated algorithms that can do fuzzy matching, because real DNA has insertions, deletions and mutations. Typical algorithms are maq, bwa, ssaha, blast. Look for differences: single base pair differences (SNPs); larger insertions/deletions/mutations. Typical experiment: compare cancer cell genomes with healthy ones. A toy version is sketched below. Reference:...TTTGCTGAAACCCAAGTGACGCCATCCAGCGTGACCACTGCATTTTTCTCGGTCATCACCAGCATTCTC.... Query: CAAGTGACGCCATCCAGCGTGACCACTGCATTTTTCTAGGTCATCACCAGCA
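As a toy version of "grep with fuzzy matching" (this is not maq/bwa/ssaha; it is a naive sliding-window scan that only handles substitutions, not insertions or deletions), the sketch below places the example query against the example reference and reports the mismatching offsets, i.e. candidate SNPs:

```python
def best_match(reference, read):
    """Slide the read along the reference; return (position, mismatch count, mismatch offsets)."""
    best = (None, len(read) + 1, [])
    for pos in range(len(reference) - len(read) + 1):
        window = reference[pos:pos + len(read)]
        mismatches = [i for i, (a, b) in enumerate(zip(window, read)) if a != b]
        if len(mismatches) < best[1]:
            best = (pos, len(mismatches), mismatches)
    return best

reference = "TTTGCTGAAACCCAAGTGACGCCATCCAGCGTGACCACTGCATTTTTCTCGGTCATCACCAGCATTCTC"
query = "CAAGTGACGCCATCCAGCGTGACCACTGCATTTTTCTAGGTCATCACCAGCA"
print(best_match(reference, query))   # one substitution where the query has A and the reference C
```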

13 Assembly Assemble fragments into a complete genome. Typical experiment: produce a reference genome for a new species. De-novo assembly: assemble fragments with no external data. Harder than it looks: non-uniform coverage, low depth, non-unique sequence (repeats). Alignment based assembly: align fragments to a related genome to get a starting scaffold which can then be refined, e.g. the H. neanderthalensis genome is being assembled against a H. sapiens sequence. A toy de-novo sketch follows.
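A minimal sketch of the de-novo idea, a greedy overlap-and-merge of fragments (real assemblers also have to cope with sequencing errors, repeats and uneven coverage, which this toy ignores; the example reads are cut from the query sequence above):

```python
def overlap(a, b, min_len=5):
    """Length of the longest suffix of a that matches a prefix of b (at least min_len)."""
    start = 0
    while True:
        start = a.find(b[:min_len], start)
        if start == -1:
            return 0
        if b.startswith(a[start:]):
            return len(a) - start
        start += 1

def greedy_assemble(fragments):
    """Repeatedly merge the pair of fragments with the largest overlap."""
    frags = list(fragments)
    while len(frags) > 1:
        best_len, best_i, best_j = 0, None, None
        for i, a in enumerate(frags):
            for j, b in enumerate(frags):
                if i != j and overlap(a, b) > best_len:
                    best_len, best_i, best_j = overlap(a, b), i, j
        if best_len == 0:
            break  # no overlaps left; remaining fragments stay as separate contigs
        merged = frags[best_i] + frags[best_j][best_len:]
        frags = [f for k, f in enumerate(frags) if k not in (best_i, best_j)] + [merged]
    return frags

reads = ["CAAGTGACGCCATCCAGC", "CCATCCAGCGTGACCACT", "GTGACCACTGCATTTTTC"]
print(greedy_assemble(reads))   # the three overlapping reads collapse into one contig
```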

14 Cancer Genomes Cancer is a disease caused by abnormalities in a cell's genome.

15 Mutation Details Lung carcinoma genome (Nature, 2010): 22,910 mutations, 58 rearrangements, 334 copy number segments.

16 Analysing Cancer Genomes Cancer genomes contain a lot of genetic damage, and many of the mutations in cancer are incidental: an initial mutation disrupts the normal DNA repair/replication processes, and corruption spreads through the rest of the genome. Today: find the driver mutations amongst the thousands of passengers; identifying the driver mutations will give us new targets for therapies. Tomorrow: analyse the cancer genome of every patient in the clinic. Variations in a patient's and a cancer's genetic makeup play a major role in how effective a particular drug will be; clinicians will use this information to tailor therapies.

17 International Cancer Genome Project Many cancer mutations are rare, so the signal-to-noise ratio is low. How do we find the rare but important mutations? Sequence lots of cancer genomes. The International Cancer Genome Project is a consortium of sequencing and cancer research centres in 10 countries. Aim of the consortium: complete genomic analysis of 50 different tumour types (50,000 genomes).

18 Past Collaborations [Diagram: several sequencing centres each send their data to a central sequencing centre + DCC (data coordination centre).]

19 Future Collaborations [Diagram: sequencing centres share data with one another through federated access.] Collaborations are short term: 18 months to 3 years.

20 Genomics Data Data size per genome, from raw to refined: intensities / raw data (2 TB), sequence + quality data (500 GB), alignments (200 GB), variation data (1 GB), individual features (3 MB). The raw end is unstructured data (flat files) handled by sequencing informatics specialists; the refined end is structured data (databases) used by clinical researchers and non-informaticians.

21 Where can grid technologies help us? Managing data. Sharing data. Making our software resources available.

22 Managing Data

23 Bulk Data The bulk end of the per-genome data: intensities / raw data (2 TB), sequence + quality data (500 GB) and alignments (200 GB), held as unstructured flat files and handled by sequencing informatics specialists. (Variation data, 1 GB, and individual features, 3 MB, are the structured remainder.)

24 Bulk Data Management We thought we were really good at it. All samples that come through the sequencing lab are bar-coded and tracked (Laboratory Information Management Systems). Sequencing machines feed into an automated analysis pipeline. All the data was tracked, analysed and archived appropriately, with strict meta-data controls: experiments do not start in the wet-lab until the investigator has supplied all the required data privacy and archiving information. Anonymised data goes straight into the archive; identifiable data goes into private/controlled archives; some data is held back until journal publication.

25 [Diagram: sequencers Seq 1 ... Seq 38 write into a 500 TB staging area; data "suckers" pull the data into the compute farm analysis/QC (alignment/assembly) pipeline; results land in the final repository (Oracle), growing at ~100 TB/yr.]

26 It turns out we were looking in the wrong place We had been focused on the sequencing pipeline, but for many investigators, data coming off the end of the sequencing pipeline is where they start. Investigators take the mass of finished sequence data out of the archives, onto our compute farms, and do stuff. The result was a huge explosion of data and disk use all over the institute, and we had no idea what people were doing with their data.

27 [Diagram: the same LIMS-managed pipeline as before (Seq 1 ... Seq 38, 500 TB staging area, analysis/QC pipeline, data "suckers", final Oracle repository at ~100 TB/yr), plus collaborators / 3rd-party sequencing feeding compute-farm disk; everything downstream of the repository is unmanaged and untracked.]

28 Accidents waiting to happen... From: (who left 12 months ago): "I find the directory is removed. The original directory is "/scratch/(who left 6 months ago)"... where is it? If this problem cannot be solved, I am afraid that cannot be released."

29 An idea whose time had come Forward-thinking groups had hacked up file tracking systems for their unstructured data: they could not keep track of where their results were, a problem exacerbated by student turnover (summer students, PhD students on rotation). Big wins with little effort: disk space usage dropped by 2/3, because lots of individuals had been keeping copies of the same data set "so I know where it is". Team leaders are happy that their data is where they think it is, and that the important stuff is on filesystems that are backed up. But: these systems are ad-hoc, quick hacks. We want an institute-wide, standardised system, and to invest in people to maintain and develop it. A minimal sketch of such a tracking hack is shown below.
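For flavour, here is a minimal sketch of what such a quick-hack catalogue looks like (entirely illustrative: the file names, owners and project labels are hypothetical); one SQLite file records where each result lives, who owns it, and a checksum:

```python
import hashlib
import os
import sqlite3
import time

DB = "file_catalogue.db"   # hypothetical catalogue location

def init():
    con = sqlite3.connect(DB)
    con.execute("""CREATE TABLE IF NOT EXISTS files
                   (path TEXT PRIMARY KEY, owner TEXT, project TEXT,
                    md5 TEXT, bytes INTEGER, registered REAL)""")
    con.commit()
    return con

def register(con, path, owner, project):
    """Record the file's location, ownership and checksum in the catalogue."""
    digest = hashlib.md5()
    with open(path, "rb") as fh:
        for block in iter(lambda: fh.read(1 << 20), b""):
            digest.update(block)
    con.execute("INSERT OR REPLACE INTO files VALUES (?,?,?,?,?,?)",
                (os.path.abspath(path), owner, project, digest.hexdigest(),
                 os.path.getsize(path), time.time()))
    con.commit()

def find(con, project):
    """Answer the question the ad-hoc systems were built for: where are this project's files?"""
    return con.execute("SELECT path, owner, md5 FROM files WHERE project = ?",
                       (project,)).fetchall()

# Usage (hypothetical names): register(init(), "snp_calls.vcf", "alice", "variation_pilot")
```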

30 iRODS iRODS: Integrated Rule-Oriented Data System. Produced by DICE (Data Intensive Cyber Environments) groups at U. North Carolina, Chapel Hill. Successor to SRB.

31 iRODS architecture [Diagram: user interfaces (WebDAV, icommands, FUSE) talk to iRODS servers holding data on disk or in databases; the ICAT catalogue database records what is where; a rule engine implements policies.]

32 Basic Features Catalogue: put data on disk and keep a record of where it is; add query-able metadata to files. Rules engine: do things to files based on file data and metadata, e.g. move data between fast and archival storage; implement policies, e.g. experiment A data should be publicly viewable, but experiment B is restricted to certain users until 6 months after deposition. Efficient: copes with PBs of data and 100,000M+ files, with fast parallel data transfers across local and wide area network links. A usage sketch follows.
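For flavour, the sketch below shows the put / add-metadata / retrieve pattern using the python-irodsclient package (a later, separately developed client library; the host, zone, credentials and paths are made up, and the exact call names should be checked against the client's documentation rather than taken from here):

```python
from irods.session import iRODSSession

# All connection details below are placeholders, not a real deployment.
with iRODSSession(host="irods.example.org", port=1247,
                  user="alice", password="secret", zone="exampleZone") as session:
    logical_path = "/exampleZone/home/alice/lane1.bam"

    # Register a local file with the catalogue (data goes to a resource, ICAT records where).
    session.data_objects.put("lane1.bam", logical_path)

    # Attach query-able metadata to the object.
    obj = session.data_objects.get(logical_path)
    obj.metadata.add("study", "cancer_genome_pilot")
    obj.metadata.add("release_after", "2010-12-01")

    print(obj.name, obj.size)
```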

33 Advanced Features Extensible: link the system out to external services, e.g. external databases holding metadata, external authentication systems. Federated: physically and logically separate iRODS installs can be federated, allowing a user at institute A to seamlessly access data at institute B in a controlled manner. Supports replication, useful for disaster recovery/backup scenarios. Policy enforcement: enforces data sharing / data privacy rules.

34 What are we doing with it? Piloting it for internal use: helping groups keep track of their data; moving files between different storage pools (fast scratch space, warehouse disk, offsite DR centre); linking metadata back to our LIMS/tracking databases. We also need to share data with other institutions. Public data is easy: FTP/HTTP. Controlled data is hard: today we encrypt files and place them on private FTP dropboxes, which is cumbersome to manage and insecure. We have a proof of concept using iRODS to provide controlled access to datasets. Will we get buy-in from the community?

35 Sharing data

36 Structured Data The structured end of the per-genome data: variation data (1 GB) and individual features (3 MB), held in databases and used by clinical researchers and non-informaticians. (Intensities / raw data, 2 TB; sequence + quality data, 500 GB; and alignments, 200 GB, remain as unstructured flat files upstream.)


38 Ensembl Ensembl is a system for genome annotation. Compute pipeline: take a raw genome and run it through a compute pipeline to find genes and other features of interest; Ensembl at Sanger/EBI provides automated analysis for 51 vertebrate genomes. Data visualisation: provides a web interface to genomic data; 10k visitors / 126k page views per day. Data access and mining: OO Perl / Java APIs, direct SQL access, bulk data download, BioMart, DAS. Software is Open Source (Apache license); data is free for download.

39 Example annotation



42 Sharing data with Web Services

43 Distributed Annotation Service Labs may have data that they want to view with Ensembl, to put their data into context with everything else. DAS is a web-services protocol that allows sharing of annotation information, developed at Cold Spring Harbor Laboratory and extended by the Sanger Institute and others. DAS information; metadata: a description of the dataset and the features supported, which can optionally be registered/validated at a central DAS registry. Data: object type; co-ordinates (typically genome species/version and position). Stylesheet: how the data should be displayed, e.g. histogram, colour gradient. A sketch of a DAS feature request is shown below.
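A minimal sketch of a DAS client request: the features command and segment co-ordinates follow the DAS convention, but the source URL below is a placeholder and the exact XML element names should be checked against the DAS specification and the source you query:

```python
import xml.etree.ElementTree as ET

import requests

DAS_SOURCE = "http://example.org/das/hsapiens_reference"   # placeholder DAS source URL

def das_features(segment):
    """Yield (id, type, start, end) for features on a segment such as '1:100000,200000'."""
    resp = requests.get(f"{DAS_SOURCE}/features", params={"segment": segment}, timeout=30)
    resp.raise_for_status()
    root = ET.fromstring(resp.content)
    for feature in root.iter("FEATURE"):            # DASGFF <FEATURE> elements
        ftype = feature.find("TYPE")
        yield (feature.get("id"),
               ftype.text if ftype is not None else None,
               feature.findtext("START"),
               feature.findtext("END"))

for fid, ftype, start, end in das_features("1:100000,200000"):
    print(fid, ftype, start, end)
```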

44 DAS community Currently ~600 DAS providers spread across 45 institutions and 18 countries. Non-responsive services are removed.




48 BioMART Provides query-based access to structured data. A collaboration between CSHL, the European Bioinformatics Institute and the Ontario Institute for Cancer Research. Example: tell me the function of genes that have substitution mutations in breast-cancer samples. This requires queries across multiple databases: the mutations are stored in COSMIC, the cancer genome database; gene function is stored in Ensembl. BioMart provides a unified entry point to these databases. A sketch of a BioMart web-service query is shown below.
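A sketch of what such a query looks like through the BioMart web service (the service URL, dataset, filter and attribute names below follow Ensembl BioMart conventions but are assumptions for illustration; check them against the mart you are querying):

```python
import requests

MART_URL = "http://www.ensembl.org/biomart/martservice"   # assumed Ensembl BioMart endpoint

# XML query: gene IDs and names on chromosome 17 from the human gene dataset.
QUERY = """<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE Query>
<Query virtualSchemaName="default" formatter="TSV" header="1" uniqueRows="1">
  <Dataset name="hsapiens_gene_ensembl" interface="default">
    <Filter name="chromosome_name" value="17"/>
    <Attribute name="ensembl_gene_id"/>
    <Attribute name="external_gene_name"/>
  </Dataset>
</Query>"""

response = requests.get(MART_URL, params={"query": QUERY}, timeout=60)
response.raise_for_status()
print(response.text[:500])   # tab-separated rows of gene IDs and names
```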

49 BioMART [Diagram: source data in Oracle, CSV, MySQL or XML is transformed/imported into a MART; queries arrive via GUI, Perl, Java and SOAP/REST interfaces; common IDs make the marts federatable.]






55 Clouds

56 Disclaimer This talk will use Amazon/EC2 because we tested it; it is not a commercial endorsement. Other cloud providers exist. It is shorthand; feel free to insert your favourite cloud provider instead.

57 Cloud-ifying Ensembl Website: a LAMP stack, ports easily to Amazon, which provides a virtual world-wide co-lo. Compute pipeline: an HPTC workload; the compute pipeline is a harder problem.

58 Expanding markets There are going to be lots of new genomes that need annotating as sequencers move into small labs and clinical settings with limited informatics / systems experience (typically postdocs/PhDs who have a real job to do). We have already done all the hard work of installing and tuning the software. Can we package up the pipeline and put it in the cloud? Goal: the end user should simply be able to upload their data, insert their credit-card number, and press GO.

59 Gene Finding [Diagram: evidence combined to find genes in DNA: HMM prediction; alignment with known proteins; alignment with fragments recovered in vivo; alignment with other genes and other species.]

60 Compute Pipeline Architecture: an OO Perl pipeline manager; core algorithms are C; 200 auxiliary binaries. Workflow: the investigator describes the analysis at a high level; the pipeline manager splits the analysis into parallel chunks (typically 50k-100k jobs), sorts out the dependencies and then submits jobs to a DRM (typically LSF or SGE); pipeline state and results are stored in a MySQL database. The workload is embarrassingly parallel: integer, not floating point; 64-bit memory addressing is nice but not required; 64-bit file access is required; single-threaded jobs; very IO intensive. A minimal sketch of the submit-and-track pattern is below.
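A minimal sketch of the submit-and-track pattern (this is not the Ensembl pipeline manager itself: the chunk count, queue name, worker script and SQLite state file are stand-ins, and it assumes LSF's bsub is on the PATH):

```python
import os
import shlex
import sqlite3
import subprocess

DB = "pipeline_state.db"   # stand-in for the production MySQL state database
CHUNKS = 100               # real analyses are split into 50k-100k chunks

def init_db():
    con = sqlite3.connect(DB)
    con.execute("CREATE TABLE IF NOT EXISTS jobs (chunk INTEGER PRIMARY KEY, status TEXT)")
    con.commit()
    return con

def submit_all(con):
    os.makedirs("logs", exist_ok=True)
    for chunk in range(CHUNKS):
        # run_chunk.sh is a hypothetical worker script; in Ensembl this is a Perl analysis job.
        cmd = f"bsub -q normal -J annotate.{chunk} -o logs/annotate.{chunk}.out ./run_chunk.sh {chunk}"
        subprocess.run(shlex.split(cmd), check=True)   # hand the chunk to the DRM (LSF here)
        con.execute("INSERT OR REPLACE INTO jobs VALUES (?, 'submitted')", (chunk,))
    con.commit()

if __name__ == "__main__":
    submit_all(init_db())
```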

61 Running the pipeline in practice Requires a significant amount of domain knowledge. The software install is complicated: lots of Perl modules and dependencies, plus Apache wrangling if you want to run a website. You need a well tuned compute cluster: the pipeline takes ~500 CPU days for a moderate genome, and Ensembl chewed up 160k CPU days last year. The code is IO bound in a number of places, so you typically need a high performance filesystem (Lustre, GPFS, Isilon, Ibrix etc). You also need a large MySQL database: 100 GB-TB MySQL instances, with a very high query load generated from the cluster.

62 How does this port to cloud environments? Creating the software stack / machine image: creating images with the software is reasonably straightforward; getting the queuing system etc. running requires jumping through some hoops. MySQL databases: there is lots of best practice on how to do that on EC2. But it took time, even for experienced systems people. (You will not be firing your system administrators just yet!)

63 Moving data is hard Moving large amounts of data across the public internet is hard: commonly used tools are not suited to wide-area networks (there is a reason gridFTP/FDT/Aspera exist). Data transfer rates (gridFTP/FDT): Cambridge to EC2 US east coast: 12 Mbytes/s (96 Mbits/s); Cambridge to EC2 Dublin: 25 Mbytes/s (200 Mbits/s). That is 11 hours to move 1 TB to Dublin and 23 hours to move 1 TB to the east coast. What speed should we get? Once we leave JANET (the UK academic network), finding out what the connectivity is and what we should expect is almost impossible.
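The quoted times follow directly from the measured rates; a quick check (decimal terabytes assumed):

```python
TB = 1e12   # bytes, decimal terabyte

for route, mbytes_per_s in [("Cambridge -> EC2 US east coast", 12),
                            ("Cambridge -> EC2 Dublin", 25)]:
    hours = TB / (mbytes_per_s * 1e6) / 3600
    print(f"{route}: {hours:.0f} hours per TB at {mbytes_per_s} Mbytes/s")
```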

64 IO Architecture [Diagram: traditional HPTC (CPUs, fat network, POSIX global filesystem, batch scheduler) vs the cloud model (CPUs, thin network, local storage per node, hadoop/S3).]

65 Storage / IO is hard There are no viable global filesystems on EC2: NFS has poor scaling at the best of times, EC2 has poor inter-node networking, and with more than 8 NFS clients everything stops. The cloud way is to store data in S3, a web based object store: get, put, delete objects (a sketch is below). It is not POSIX, so code needs re-writing / forking, and there are limitations: you cannot store objects > 5 GB. Nasty hacks exist, e.g. Subcloud, a commercial product that lets you run a POSIX filesystem on top of S3: interesting performance, and you are paying by the hour...
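Sketched with today's boto3 client (which post-dates this talk; the bucket and key names are made up), the S3 interface is whole-object get/put/delete with no seek, append or rename, which is why code written against POSIX filesystems needs re-writing:

```python
import boto3

s3 = boto3.client("s3")
bucket = "example-genomics-bucket"        # hypothetical bucket
key = "runs/run42/lane1.fastq.gz"         # hypothetical object key

# Upload: the whole object is written in one call (very large objects need multipart uploads).
with open("lane1.fastq.gz", "rb") as fh:
    s3.put_object(Bucket=bucket, Key=key, Body=fh)

# Download: the whole object is read back; there is no open/seek/partial rewrite.
obj = s3.get_object(Bucket=bucket, Key=key)
data = obj["Body"].read()
print(len(data), "bytes retrieved")

s3.delete_object(Bucket=bucket, Key=key)
```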

66 Going forward

67 Cloud vs HPTC Re-writing apps to use S3 or hadoop/HDFS is a real hurdle. It is not an issue for new apps, but new apps do not exist in isolation, and the barrier to entry is much lower for file-systems. Am I being a reactionary old fart? 15 years ago clusters of PCs were not real supercomputers... then Beowulf took over the world. The big difference: porting applications between the two architectures was easy (MPI/PVM etc). Will the market provide traditional compute clusters in the cloud?

68 Networking How do we improve data transfers across the public internet? The CERN approach: don't. Dedicated networking has been put in between CERN and the T1 centres, who get all of the CERN data. Our collaborations are different: relatively short lived and fluid (1-2 years, many institutions), and as more labs get sequencers, our potential collaborators also increase. We need good connectivity to everywhere.

69 Can we turn the problem on its head? Fixing the internet is not going to be cost effective for us, but Amazon fixing the internet may be cost effective for them: it is core to their business model. All we need to do is get data into Amazon, and then everyone else can get the data from there: the cloud as a virtual co-location site, hosting mass datastores and mirror sites for our web services. This requires us to invest in fast links to Amazon, and it changes the business dynamic: we have effectively tied ourselves to a single provider, which is an expensive mistake if you change your mind or your provider goes.

70 Identity management Web services for linking databases together are mature, but they are currently all public. There will be demand for restricted services, e.g. patient-identifiable data. This is our next big challenge. There are lots of solutions (OpenID, Shibboleth, aspis, Globus etc), but finding consensus will be hard. Culture shock.

71 Acknowledgements Sanger Institute: Phil Butcher. ISG: James Beal, Gen-Tao Chiang, Pete Clapham, Simon Kelley. Cancer Genome Project: Adam Butler, John Teague. STFC: David Corney, Jens Jensen.

72 Sites of interest
