
1 Reporting structures for Image Cytometry: Context and Challenges
Chris Taylor, EMBL-EBI & NEBC (chris.taylor@ebi.ac.uk)
MIBBI [www.mibbi.org]
HUPO Proteomics Standards Initiative [psidev.sf.net]
Research Information Network [www.rin.ac.uk]

2 Mechanisms of scientific advance

3 Well-oiled cogs meshing perfectly (would be nice)
How well are things working?
—Cue the Tower of Babel analogy…
—The situation is improving with respect to standards
—But few tools, and fewer carrots (though some sticks)
Why do we care about that?
—Data exchange
—Comprehensibility (/quality) of work
—Scope for reuse (parallel or orthogonal)
“Publicly-funded research data are a public good, produced in the public interest.”
“Publicly-funded research data should be openly available to the maximum extent possible.”

4 1. Increased efficiency
• Methods remain properly associated with the results generated
—Data sets generated with specific techniques/materials can be retrieved from repositories (or excluded from result sets)
• No need to repeatedly construct sets of contextualizing information
—Facilitates the sharing of data with collaborators
—Avoids the risk of losing information through staff turnover
—Enables time-efficient handover of projects
• For industry specifically (in the light of 21 CFR Part 11)
—The relevance of data can be assessed from summaries, without wasting time wading through full data sets in diverse proprietary formats (‘business intelligence’)
—Public data can be leveraged as ‘commercial intelligence’

5 2. Enhanced confidence in data
• Enables fully-informed assessment of results (methods used, etc.)
• Supports the assessment of results that may have been generated months or even years ago (e.g. for referees or regulators)
• Facilitates better-informed comparisons of data sets
—Increased likelihood of discovering the factors (controlled and uncontrolled) that might differentiate those data sets
• Supports the discovery of sources of systematic or random error, by correlating errors with metadata features such as the date or the operator concerned
• Provides sufficient information to support the design of appropriate parallel or orthogonal studies to confirm or refute a given result

6 3. Added value, tool development
• Re-using existing data sets for a purpose significantly different to that for which the data were generated
• Building aggregate data sets containing (similar) data from different sources (including standards-compliant public repositories)
• Integrating data from different domains
—For example, correlating changes in mRNA abundances, protein turnover and metabolic fluxes in response to a stimulus
• Design requirements become both explicit and stable
—MIAPE modules as driving use cases (tools, formats, CV, DBs)
—Promotes the development of sophisticated analysis algorithms
—Presentation of information can be ‘tuned’ appropriately
—Makes for a more uniform experience

7 Credit where credit’s due
• Data sharing is more or less a given now, and tools are emerging
—Lots of sticks, but they only get the bare minimum
—How to get the best out of data generators?
—Need standards- and user-friendly tools, and meaningful credit
• Central registries of data sets that can record reuse
—Well-presented, detailed papers get cited more frequently
—The same principle should apply to data sets
—ISNIs for people, DOIs for data: http://www.datacite.org/
• Side-benefits, challenges
—Would also clear up problems around paper authorship
—Would enable other kinds of credit (training, curation, etc.)
—Community policing: researchers ‘own’ their credit portfolio (an enforcement body would be useful; more likely through review)
—The problem of ‘micro data sets’ and legacy data

8 ProteoRED’s MIAPE satisfaction survey
• A Spanish multi-site collaboration: provision of proteomics services
• MIAPE customer satisfaction survey (compiled November 2008)
—http://www.proteored.org/MIAPE_Survey_Results_Nov08.html
—Responses from 31 proteomics experts representing 17 labs
(Chart: Yes 95%, No 5%)

9 Is the generation of MIAPE-compliant reports useful?
—“Is useful as a results report for my customer to publish proteomics data and for my lab to have all the conditions employed in each analysis.”
—“For the customers is an easy and very fast way of having a general view of the experiment and is useful for comparison of data. For my lab is a good way of data compilation and is necessary for data publication.”
—“The MIAPE compliant reports is useful for the customers because they have the complete information about their experiment and for my own purposes in my lab for the same reason.”
Are MIAPE-compliant reports useful as a quality label for your customers?
—“A MIAPE report is right now, by itself, the best quality label possible.”
—“Yes, sure. The user has all the information necessary to reproduce their analysis of a robust manner.”
—“As for today, none of our customers had the need for MIAPE documentation.”

10 So what (/why) is a standards body again…?
Consider the three main ‘omics’ standards bodies
—What defines a (candidate) standards-generating body?
—“A beer and an airline” (Zappa)
—Requirements, formats, vocabulary
—Regular full-blown open-attendance meetings, lists, etc.
—PSI (proteomics), GSC (genomics), MGED (transcriptomics)
Hugely dependent on their respective communities
—Requirements (What are we doing and why are we doing it?)
—Development (By the people, for the people. Mostly.)
—Testing (No it isn’t finished, but yes I’d like you to use it…)
—Uptake, by all of the various kinds of stakeholder:
—Publishers, funders, vendors, tool/database developers
—The user community (capture, store, search, analyse)

11 Ingredients for MI pie
Domain specialists & IT types (initial drafts, evolution)
Journals
—The real issue for any MI project is getting enough people to comment on what you have (this distinguishes a toy project from something to be taken seriously — community buy-in)
—Having journals help in garnering reviews is great (editorials, web-site links, even mail shots). Their motive, of course, is that fuller reporting = better content = a higher citation index.
Funders
—MI projects can claim to be slightly outside ‘normal’ science; they may form components of funding policy (arguments about maximum value)
—Funders therefore have a motive (similar to journals’) to ensure that MI guidelines, which they may endorse down the line, are representative and mature
—They can help by allocating slots at (appropriate) meetings of their award holders for you to show your stuff. Things like that.

12 Ingredients for MI pie (continued)
Vendors
—The cost of MIs in person-hours will be the major objection
—Vendors can implement parameter export to an appropriate file format, ideally using some helpful CV (somebody else’s problem; see the sketch below)
—Vendors also have engineers (and some sales staff) who really know their kit and make for great contributors/reviewers
—For some standards bodies (like PSI and MGED) their sponsorship has also been very helpful (believe it or not, it would seem possible to monetise a standards body)
Food / pharma
—Already used to better, if rarely perfect, data capture and management; for example, 21 CFR Part 11 (MI = exec summary…)
Trainers
—There is a small army of individuals training scientists, especially in relation to IT (the EBI does a lot of this, but I mean commercial training providers) → ‘resource packs’
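As a concrete illustration of the vendor point above, here is a minimal sketch of parameter export with controlled-vocabulary references, loosely following the cvRef/accession/name/value pattern used by PSI formats; the element names and the accession below are placeholders, not verified CV terms.

```python
from xml.etree.ElementTree import Element, SubElement, tostring

def export_params(param_list):
    """Emit instrument parameters as XML, each tagged with a CV reference.

    The element and attribute names echo the PSI cvParam style; the
    accessions passed in here are illustrative placeholders only.
    """
    root = Element("instrumentConfiguration")
    for cv_ref, accession, name, value in param_list:
        SubElement(root, "cvParam", cvRef=cv_ref,
                   accession=accession, name=name, value=value)
    return tostring(root, encoding="unicode")

# Placeholder accession, not a real PSI-MS term
print(export_params([("MS", "MS:0000000", "collision energy", "35")]))
```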

13 Modelling the biosciences
Technologically-delineated views of the world
A: transcriptomics
B: proteomics
C: metabolomics
…and…
Biologically-delineated views of the world
A: plant biology
B: epidemiology
C: microbiology
…and…
Generic features (‘common core’)
—Description of source biomaterial
—Experimental design components
(Diagram labels: Arrays, Scanning, Columns, Gels, MS, FTIR, NMR)

14 Modelling the biosciences (slightly differently)
Investigation: medical syndrome, environmental effect, etc.
Study: toxicology, environmental science, etc.
Assay: omics and miscellaneous techniques
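A minimal sketch of how this three-tier Investigation/Study/Assay model could be represented in code; the class and field names are illustrative, not taken from any published schema.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Assay:
    technology: str                      # e.g. "proteomics", "microscopy"
    measurements: List[str] = field(default_factory=list)

@dataclass
class Study:
    domain: str                          # e.g. "toxicology"
    assays: List[Assay] = field(default_factory=list)

@dataclass
class Investigation:
    topic: str                           # e.g. "environmental effect"
    studies: List[Study] = field(default_factory=list)

# One investigation containing one study with one assay
inv = Investigation("environmental effect",
                    [Study("toxicology", [Assay("proteomics")])])
print(inv)
```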

15 Reporting guidelines — a case in point
• MIAME, MIAPE, MIAPA, MIACA, MIARE, MIFACE, MISFISHIE, MIGS, MIMIx, MIQAS, MIRIAM, (MIAFGE, MIAO), My Goodness…
• ‘MI’ checklists are usually developed independently, by groups working within particular biological or technological domains
—Difficult to obtain an overview of the full range of checklists
—Tracking the evolution of a single checklist is non-trivial
—Checklists are inevitably partially redundant against one another
—Where they overlap, arbitrary decisions on wording and sub-structuring make integration difficult
• Significant difficulties for those who routinely combine information from multiple biological domains and technology platforms
—Example: an investigation of the impact of toxins on a sentinel species using proteomics (‘eco-toxico-proteomics’)
—Which reporting standard(s) should they be using?

16 The MIBBI Project (www.mibbi.org)
• An international collaboration between communities developing ‘Minimum Information’ (MI) checklists
• Two distinct goals (Portal and Foundry)
—Raise awareness of the various minimum reporting specifications
—Promote gradual integration of checklists
• Lots of enthusiasm (drafters, users, funders, journals)
• 31 projects committed (to the Portal) to date, including:
—MIGS, MINSEQE & MINIMESS (genomics, sequencing)
—MIAME (μarrays), MIAPE (proteomics), CIMR (metabolomics)
—MIGen & MIQAS (genotyping), MIARE (RNAi), MISFISHIE (in situ)

17 Nature Biotechnol 26(8), 889–896 (2008) http://dx.doi.org/10.1038/nbt.1411

18 The MIBBI Project (www.mibbi.org)

19 (image-only slide)

20 Interaction graph for projects (line thickness & colour saturation show similarity)
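One plausible way to produce such a graph is to score pairwise overlap between checklists (here with a Jaccard index over their required items) and map the score onto edge weight; the term sets below are invented for illustration and do not reproduce the real MIBBI comparison.

```python
import itertools
import networkx as nx

# Invented, abbreviated term sets; stand-ins for real checklist contents
checklists = {
    "MIAME": {"sample", "array design", "protocol", "normalisation"},
    "MIAPE": {"sample", "protocol", "instrument", "search engine"},
    "MIFlowCyt": {"sample", "instrument", "gating", "fluorochrome"},
}

G = nx.Graph()
for (a, sa), (b, sb) in itertools.combinations(checklists.items(), 2):
    jaccard = len(sa & sb) / len(sa | sb)  # overlap of required items
    if jaccard > 0:
        G.add_edge(a, b, weight=jaccard)   # weight drives line thickness

for a, b, d in G.edges(data=True):
    print(f"{a} -- {b}: {d['weight']:.2f}")
```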

21 The MIBBI Project (www.mibbi.org)

22 ‘Pedro’ tool → XML → (via XSLT) Wiki code (etc.)
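A toy sketch of the XML-to-wiki step of that pipeline, using an invented report snippet and stylesheet; the real Pedro output schema and the production XSLT are not shown here.

```python
from lxml import etree

# Invented stand-in for a Pedro-exported report
xml_src = b"""<report><section name="Instrument">
  <param key="Vendor">ACME</param>
  <param key="Model">QTOF-9000</param>
</section></report>"""

# Invented stylesheet turning sections into wiki headings and bullets
xslt_src = b"""<xsl:stylesheet version="1.0"
    xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
  <xsl:output method="text"/>
  <xsl:template match="section">
    <xsl:text>== </xsl:text><xsl:value-of select="@name"/>
    <xsl:text> ==&#10;</xsl:text>
    <xsl:apply-templates select="param"/>
  </xsl:template>
  <xsl:template match="param">
    <xsl:text>* </xsl:text><xsl:value-of select="@key"/>
    <xsl:text>: </xsl:text><xsl:value-of select="."/>
    <xsl:text>&#10;</xsl:text>
  </xsl:template>
</xsl:stylesheet>"""

transform = etree.XSLT(etree.fromstring(xslt_src))
print(str(transform(etree.fromstring(xml_src))))
```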

23 MICheckout: Supporting Users

24 Minimum Information guidelines: progress on uptake
• MIAME is the earliest of the ‘new generation’ of guidelines
—Supported by ArrayExpress/GEO
—Required by many journals
• CONSORT (www.consort-statement.org & www.equator-network.org)
—Required by many journals (N.B., no databases per se)
• Other guidelines recommended for consideration
—Individually (e.g., MIMIx, MIFlowCyt [NPG])
—Via MIBBI (BMC, Science [soon], OMICS, others coming too)
• Many funders recommend use of ‘accepted’ community standards
• But… uptake is close to nil for projects lacking supporting resources
—Case in point: MIAPE (no usage until a web tool appeared)

25 ICS: overlapping guidelines registered at the MIBBI Portal
• The study sample (potentially described in header metadata)
—CIMR (human samples, cell culture)
—MIFlowCyt (cell counting/sorting)
—MIACA, MIATA (cell-based assays)
• The assay
—MIACA (cell-based assays)
—Some general overlap (software, processing)
• Image analysis
—Some general overlap (image data [MIAPE] & statistics)
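In the spirit of MICheckout, a tailored checklist for such an experiment could in principle be assembled by merging the relevant modules and recording the provenance of each item; the item lists below are invented placeholders, not the actual checklist contents.

```python
from collections import defaultdict

# Invented, abbreviated module contents for illustration only
modules = {
    "CIMR": ["sample origin", "cell culture conditions"],
    "MIFlowCyt": ["cell counting protocol", "sample origin"],
    "MIACA": ["assay layout", "cell culture conditions"],
}

# Merge items across modules, keeping track of which source(s) require each
merged = defaultdict(list)
for source, items in modules.items():
    for item in items:
        merged[item].append(source)

for item, sources in sorted(merged.items()):
    print(f"{item}  <- {', '.join(sources)}")
```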

26 Tools?

27 From theory to practice: tools for the community
(Diagram labels: EXPERIMENTALIST, Experiments, Download similar studies)
Java standalone components, for local installation, that can work independently or as a unified system

28 Example of guiding the experimentalist to search and select a term from the EnvO ontology, to describe the ‘habitat’ of a sample. Ontologies are accessed in real time via the Ontology Lookup Service and BioPortal. (Slide: Susanna-Assunta Sansone, www.ebi.ac.uk/net-project)
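For illustration, this kind of term lookup can be scripted against the OLS REST search endpoint; the URL and response fields below match one version of the OLS API and may differ from the service as deployed.

```python
import requests

# Search EnvO for terms matching a free-text habitat description
resp = requests.get(
    "https://www.ebi.ac.uk/ols4/api/search",
    params={"q": "sea water", "ontology": "envo", "rows": 5},
    timeout=10,
)
resp.raise_for_status()

# Print candidate terms for the user to choose from
for doc in resp.json()["response"]["docs"]:
    print(doc.get("obo_id"), doc.get("label"))
```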

29 Spreadsheet functionality, including: move, add, copy, paste, undo, redo and right-click options

30 Groups of samples are colour-coded

31 Public instance deployed at the EBI

32 (image-only slide)

