
1 abstrackr: open-source, web-based software for (semi-automated?!) abstract screening. This project is currently supported by funding from AHRQ, grant number R01HS018494.

2 Abstract screening: an unpleasant necessity
An image of a reviewer sitting down to screen a large pile of abstracts stacked on the table in front of him. He is visibly unhappy with having to undertake this task.

3 abstrackr makes life easier!
1. upload citations: PMIDs, RefMan XML, RIS, or tab-delimited files
2. invite collaborators
3. screen: abstrackr handles allocation and prioritizes screening citations likely to be relevant
4. export screening decisions
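The upload step hinges on parsing citation files. Below is a minimal sketch of an RIS reader, purely illustrative and not abstrackr's actual importer; the two-letter tags (`TI` for title, `AU` for author) and the `ER` end-of-record marker are part of the RIS format itself, while the function name and sample record are invented:

```python
def parse_ris(text):
    """Parse RIS-formatted text into a list of {tag: [values]} dicts."""
    records, current = [], {}
    for line in text.splitlines():
        if line.strip().startswith("ER"):      # end-of-record marker
            if current:
                records.append(current)
            current = {}
        elif "  - " in line:                   # "TAG  - value" lines
            tag, _, value = line.partition("  - ")
            current.setdefault(tag.strip(), []).append(value.strip())
    return records

sample = """TY  - JOUR
TI  - A study of abstract screening
AU  - Doe, Jane
ER  -
"""
print(parse_ris(sample)[0]["TI"])  # → ['A study of abstract screening']
```

Real RIS files also carry continuation lines and repeated tags (multiple `AU` entries), which is why values accumulate in lists here.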

4 But wait, there’s more!
single- or double-screening
all labels are exported; you can see who screened which abstracts
resolve conflicts (if screeners disagree)
see how many unscreened abstracts are likely to be relevant
add notes about studies and/or tags to them
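In double-screening mode, conflict resolution amounts to diffing two screeners' label sets. A toy sketch follows; the label encoding (1 = accept, -1 = reject, 0 = maybe) and all names are illustrative assumptions, not abstrackr's internals:

```python
def find_conflicts(labels_a, labels_b):
    """Return IDs of studies that both screeners labeled, but differently."""
    conflicts = []
    for study_id in labels_a.keys() & labels_b.keys():  # double-screened only
        if labels_a[study_id] != labels_b[study_id]:
            conflicts.append(study_id)
    return sorted(conflicts)

# hypothetical screening decisions: 1 = accept, -1 = reject, 0 = maybe
screener_a = {"pmid1": 1, "pmid2": -1, "pmid3": 1}
screener_b = {"pmid1": 1, "pmid2": 1, "pmid3": 0}
print(find_conflicts(screener_a, screener_b))  # → ['pmid2', 'pmid3']
```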

5 A screenshot of the abstrackr web application
A screenshot of the abstrackr web application. An abstract (which happens to be about vampires) is centered in the browser window. The 'abstract panel' comprises the text of the abstract and some metadata, such as keywords and authors, associated with the current citation. There are controls surrounding the abstract that allow the user to annotate it. Specifically, to the left of the panel displaying the abstract there is a 'tags' box, which includes one tag that has already been entered (it reads 'vampires'). There are two buttons beneath this tag: a 'tag study' button and an 'edit tags' button. Beneath the abstract panel, there are three buttons that allow the user to mark an abstract as an 'accept', a 'reject', or a 'borderline' (maybe) case. Beneath these buttons there is a text box for the user to enter relevant 'terms' that suggest a document should be included (or excluded). These terms will be highlighted in any abstracts in which they occur; terms indicative of inclusion will be colored green, while those indicative of exclusion will be red. Finally, there are two buttons above the abstract that allow the user to review the labels or the terms she has provided so far.

6 Same as slide 5, but now annotated with arrows pointing to the bits of functionality described above: the 'tags' box, the 'make screening decision' buttons, and the 'words of interest' box.

7 unstructured and structured (PICO) note-taking
This slide shows the user interface for 'note-taking'. A notes 'pop-up' is placed in front of the abstract panel. It comprises several text boxes: one labeled 'general notes' and one for each of the PICO (Population, Intervention, Comparator, Outcome) elements. Users may take unstructured notes (general) or structured notes (for the PICO elements).

8 A few words on semi-automating screening via machine learning

9 An illustration of the work-flow in the abstrackr program
An illustration of the work-flow in the abstrackr program. The figure includes a cartoonish image of a contemplative person (reviewer) deciding if an abstract is relevant or not. This reviewer began screening by issuing an initial query to a database, as indicated by an arrow pointing from the user to a database icon representative of PubMed. An arrow then points from the database to a pile of documents which in turn points to a block representing the abstrackr program: this indicates that the documents have been exported from PubMed into abstrackr. An iterative process between the reviewer and abstrackr is then depicted; the reviewer makes screening decisions which are fed into the system, and the system then selects the next citation he should label.
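The iterative loop in this figure can be sketched as uncertainty sampling: rescore the remaining pool, then ask the reviewer about the citation the model is least sure of. A toy version follows; the function names, the stand-in scorer, and the fixed probabilities are all invented for illustration and are not abstrackr's actual prioritization code:

```python
def screening_loop(unlabeled, label_fn, score_fn, budget):
    """Repeatedly pick the most uncertain citation and ask for its label."""
    labeled = {}
    for _ in range(min(budget, len(unlabeled))):
        # most uncertain = predicted probability of relevance closest to 0.5
        next_id = min(unlabeled, key=lambda c: abs(score_fn(c) - 0.5))
        labeled[next_id] = label_fn(next_id)   # the reviewer's decision
        unlabeled.remove(next_id)
    return labeled

pool = ["c1", "c2", "c3", "c4"]
probs = {"c1": 0.9, "c2": 0.55, "c3": 0.1, "c4": 0.48}  # stand-in model scores
decisions = screening_loop(pool,
                           label_fn=lambda c: probs[c] > 0.5,
                           score_fn=lambda c: probs[c],
                           budget=2)
print(decisions)  # c4 (0.48) and c2 (0.55) are screened first
```

Note that abstrackr's prioritization, as described on slide 3, ranks citations likely to be *relevant* first; uncertainty sampling as sketched here is one common alternative selection strategy in the active-learning literature.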

10 Semi-Automating Abstract Screening via Supervised Machine Learning
Slide 10 depicts the supervised machine learning paradigm. The process begins with an unlabeled set of data (in our case, abstracts from a PubMed search). The (human) expert (here, the doctor conducting the review) labels a sample of this data by screening, and the manually screened abstracts are used to induce a (predictive) classification model (an SVM).
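The slide names an SVM; as a dependency-free stand-in, here is a bag-of-words perceptron, which is also a linear classifier over word features. The tiny training set and all names are invented for illustration and do not reflect abstrackr's actual model:

```python
from collections import Counter

def tokenize(text):
    return text.lower().split()

def train(examples, epochs=10):
    """Perceptron over bag-of-words features; label is +1 include, -1 exclude."""
    w = Counter()  # one weight per word
    for _ in range(epochs):
        for text, label in examples:
            feats = Counter(tokenize(text))
            score = sum(w[t] * c for t, c in feats.items())
            pred = 1 if score > 0 else -1
            if pred != label:                      # mistake-driven update
                for t, c in feats.items():
                    w[t] += label * c
    return w

def predict(w, text):
    return 1 if sum(w[t] for t in tokenize(text)) > 0 else -1

screened = [("randomized trial of statins", 1),
            ("case report of a rare rash", -1),
            ("trial of aspirin dosing", 1),
            ("editorial opinion piece", -1)]
w = train(screened)
print(predict(w, "randomized trial of aspirin"))  # → 1
```

A real system would use richer features (n-grams, TF-IDF weighting) and a regularized learner, but the structure, screened abstracts in and a linear decision rule out, is the same.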

11 Results (Updating Reviews)
We can achieve 100% recall while substantially reducing workload. This is a plot of results achieved using machine learning to semi-automate screening for updating an existing systematic review. The plot is in ROC space, so the x-axis is 1 − specificity (the false positive rate) and the y-axis is sensitivity. There are four points on the plot, representing the performance of the classification system on four datasets (AlzGene, PDGene, SZGene and CEARegistry). The points representing the first three are at 100% sensitivity (very top) and ~90% specificity (very near the left); the point representing CEARegistry is at 99% sensitivity and ~75% specificity.

12 This is another screenshot from abstrackr that demonstrates the integration of the machine learning component with the system as a whole. Specifically, this shows a page, accessible for each review, that plots a histogram of the predicted likelihoods of the remaining abstracts being relevant, according to the machine learning model. The x-axis comprises 'bins' of the (predicted) probability of inclusion, and the y-axis is the number of abstracts falling in each bin. The density is centered around 0.2 to 0.3, indicating that most of the remaining studies in this review will probably be excluded.

13 And now a brief demo http://abstrackr.tuftscaes.org [1]
[1] the url will change to Brown soon, but we’ll re-direct

