
Taming Information Overload Carlos Guestrin Khalid El-Arini Dafna Shahaf Yisong Yue

Search Limitations. Input → Output → Interaction: today, the only interaction search supports is issuing a new query.

Our Approach. Input: phrase complex information needs. Output: structured, annotated output. Interaction: richer forms of interaction than just a new query.

Millions of blog posts are published every day. Some stories become disproportionately popular. It is hard to find the information you care about.

Our goal: coverage. Turn down the noise in the blogosphere: select a small set of posts that covers the most important stories (e.g., those of January 17, 2009).

Our goal: personalization. Tailor post selection to user tastes: contrast the posts selected without personalization with those selected after personalization based on Zidane's feedback (a sports fan: "But, I like sports! I want articles like this one.").

TDN outline [El-Arini, Veda, Shahaf, G. '09]. Coverage: formalize the notion of covering the blogosphere; near-optimal solution for post selection; evaluate on real blog data and compare against existing aggregators. Personalization: learn a personalized coverage function; algorithm for learning user preferences using limited feedback; evaluate on real blog data.

Approach Overview: Blogosphere → Feature Extraction → Coverage Function → Post Selection → Personalization.

Document Features. Low level: words, noun phrases, named entities (e.g., Obama, China, peanut butter). High level: e.g., topics, where a topic is a probability distribution over words (e.g., an Inauguration topic, a National Security topic).
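To make the high-level features concrete, here is a minimal sketch of deriving topic features with gensim's LDA implementation; the slides do not prescribe tooling, so gensim and the toy corpus below are illustrative assumptions.

```python
from gensim.corpora import Dictionary
from gensim.models import LdaModel

# Toy stand-in for preprocessed blog posts (assumption).
posts = [["obama", "inauguration", "speech", "washington"],
         ["security", "threat", "airport", "washington"],
         ["inauguration", "crowd", "obama", "ceremony"]]

dictionary = Dictionary(posts)
bow = [dictionary.doc2bow(p) for p in posts]

# Topic = probability distribution over words, as on the slide.
lda = LdaModel(bow, num_topics=2, id2word=dictionary, passes=10)

# Each post's high-level feature vector: its topic distribution.
for p in bow:
    print(lda.get_document_topics(p, minimum_probability=0.0))
```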

Coverage Function. cover_d(f) = the amount by which post d covers feature f; cover_A(f) = the amount by which a set of posts A covers feature f. Some features are more important than others, so weigh them, e.g., by frequency. A single post never covers a feature completely, so use a soft notion of coverage, e.g., the probability that at least one post in A covers feature f.
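Reading cover_d(f) as the probability that post d covers feature f, independently across posts, the soft coverage the slide describes becomes:

```latex
\mathrm{cover}_A(f) \;=\; 1 \;-\; \prod_{d \in A} \bigl(1 - \mathrm{cover}_d(f)\bigr)
```

i.e., the probability that at least one post in A covers f.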

Objective Function for Post Selection. Want to select the set of posts A shown to the user to maximize F(A) = Σ_f λ_f · cover_A(f), where f ranges over the feature set and λ_f are the weights on features. Maximizing F is NP-hard, but since "probability that set A covers feature f" is submodular, a greedy algorithm gives a (1−1/e)-approximation, and the CELF algorithm is very fast with the same guarantees.
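A minimal sketch of the lazy-greedy idea behind CELF, assuming a user-supplied marginal-gain oracle; this is illustrative code, not the authors' implementation.

```python
import heapq

def celf(posts, marginal_gain, k):
    """Lazy-greedy (CELF-style) maximization of a monotone submodular
    set function. `marginal_gain(d, S)` returns F(S + [d]) - F(S).
    Same (1 - 1/e) guarantee as plain greedy, far fewer evaluations."""
    selected = []
    # Max-heap via negated gains; entries: (-gain, index, round computed).
    heap = [(-marginal_gain(d, selected), i, 1) for i, d in enumerate(posts)]
    heapq.heapify(heap)
    for rnd in range(1, k + 1):
        while True:
            neg_gain, i, computed = heapq.heappop(heap)
            if computed == rnd:          # gain is up to date: select it
                selected.append(posts[i])
                break
            # By submodularity, gains only shrink as `selected` grows,
            # so we only re-evaluate elements that reach the top.
            heapq.heappush(heap, (-marginal_gain(posts[i], selected), i, rnd))
    return selected
```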

Approach Overview: Blogosphere → Feature Extraction → Coverage Function → Post Selection (submodular function optimization) → Personalization.

Evaluating Coverage. Real blog data from Spinn3r: a 2-week period in January 2009, ~200K posts per day (after pre-processing). Two variants of our algorithm: TDN+LDA (high-level features: Latent Dirichlet Allocation topics) and TDN+NE (low-level features: named entities). User study involving 27 subjects to evaluate topicality & redundancy.

Topicality = relevance to current events. Compare each post under evaluation against the day's reference stories (ground truth), e.g., "Downed jet lifted from ice-laden Hudson River. NEW YORK (AP) - The airliner that was piloted to a safe emergency landing in the Hudson…" Is this post topical, i.e., is it related to any of the major stories of the day?

User study: are the selected stories important that day? (Higher is better.) With LDA topics as features (TDN+LDA) or named entities and common noun phrases as features (TDN+NE), we do as well as Yahoo! and Google.

Evaluation: Redundancy. Given the previously shown posts, e.g., 1. Israel unilaterally halts fire as rockets persist; 2. Downed jet lifted from ice-laden Hudson River; 3. Israeli-trained Gaza doctor loses three daughters and niece to IDF tank shell: is this post redundant with respect to any of the previous posts?

User study: are the stories redundant with each other? (Lower is better.) Google performs poorly; we (TDN+LDA, TDN+NE) do as well as Yahoo!.

User study summary. Topicality (higher is better) and redundancy (lower is better): Google has good topicality but high redundancy; Yahoo! performs well on both, but uses rich features (CTR, search trends, user voting, etc.). TDN performs as well as Yahoo! using only post content.

TDN outline. Coverage: formalize the notion of covering the blogosphere; near-optimal solution for post selection; evaluation on real blog data against existing aggregators. Personalization: learn a personalized coverage function; algorithm for learning user preferences using limited feedback; evaluate on real blog data.

Personalization. People have varied interests (one reader follows Barack Obama, another Britney Spears). Our goal: learn a personalized coverage function using limited user feedback.

Personalized post selection. The pipeline becomes: Blogosphere → personalized coverage function → personalized post selection. Learning your coverage function is an online learning of submodular functions problem, for which we give an algorithm with strong theoretical guarantees.

Modeling User Preferences. π_f represents the user's preference for feature f; we want to learn this preference over features (e.g., high weight on sports features for a sports fan, on politics features for a politico). A feature's personalized weight combines the user preference with the importance of the feature in the corpus.
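In this notation, the personalized coverage objective the slide suggests would take a form like the following (a reconstruction, with w_f denoting the importance of feature f in the corpus):

```latex
F_{\pi}(A) \;=\; \sum_{f} \pi_f \, w_f \, \mathrm{cover}_A(f)
```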

Exploration vs. Exploitation. Goal: recommend content that users like, exploiting previous feedback from users. However, users only provide feedback on recommended content, so we must explore to collect feedback on new topics. Solution: an algorithm that balances exploration vs. exploitation — the Linear Submodular Bandits Problem.

Balancing Exploration & Exploitation [Yue & Guestrin, 2011]. For each slot in the recommended set, maximize a trade-off of the estimated coverage gain (the mean estimate by topic) plus the uncertainty of that estimate, e.g., pick an article about tennis when its optimistic estimate is highest.
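A sketch of the slot-by-slot trade-off in code, in the spirit of a linear submodular bandit; all names below (features, w_hat, M_inv, alpha) are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def select_slate(articles, features, w_hat, M_inv, alpha, k):
    """Fill k slots greedily, scoring each candidate by estimated
    coverage gain plus an uncertainty bonus (upper confidence bound).
    `features(a, slate)` returns the vector of marginal topic-coverage
    gains of article a given the slate so far; `w_hat` is the current
    preference estimate and `M_inv` its inverse covariance."""
    slate = []
    for _ in range(k):
        best, best_score = None, -np.inf
        for a in articles:
            if a in slate:
                continue
            x = features(a, slate)
            score = w_hat @ x + alpha * np.sqrt(x @ M_inv @ x)
            if score > best_score:
                best, best_score = a, score
        slate.append(best)
    return slate
```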

Learning User Preferences [Yue & Guestrin, 2011]. Alternate the explore-exploit trade-off with a least-squares update of the preference estimate after each round of feedback: before any feedback the estimate is uninformed; after 1 and then 2 days of personalization it moves toward the true π*. Theorem: for the bandit algorithm with learned π, (1−1/e)·avg(reward of the true π*) − avg(reward of the algorithm) → 0, i.e., we achieve no-regret and learn a good approximation of the true π*. Rate: 1/sqrt(kT), recommending k documents over T rounds.

User Study: Online Learning for News Recommendation. 27 users in study. Counting wins/ties/losses, submodular bandits wins against static weights, against multiplicative updates (no exploration), and against no-diversity bandits.

We Synthesist [Guestrin, Khan ’09]

Our Approach. Input: phrase complex information needs. Output: structured, annotated output. Interaction: richer forms of interaction. What about more complex information needs?

A Prescient Warning. "As long as the centuries…unfold, the number of books will grow continually… as convenient to search for a bit of truth concealed in nature as to find it hidden away in an immense multitude of bound volumes." - Denis Diderot, Encyclopédie (1755). Today: 10^7 papers in 10^5 conferences/journals (Thomson Reuters Web of Knowledge). How do we cope?

Motivation (1) Is there an approximation algorithm for the submodular covering problem that doesn’t require an integral-valued objective function? Any recent papers influenced by this?

Motivation (2). It's 11:30pm Samoa Time. Your "Related Work" section is a bit sparse. Here are some papers we've cited so far; anything else?

Given a set of relevant query papers, what else should I read?

An example query set. Perhaps a seminal/background paper (cited by all query papers)? A competing approach (cites all query papers)? However, we are unlikely to find papers directly connected to the entire query set; we need something more general…

Select a set of papers A with maximum influence to/from the query set Q

Modeling influence Ideas flow from cited papers to citing papers

Influence context. Why do I cite this paper? E.g., for its generative model of text, its variational inference, its use of EM, … We call these contexts concepts.

Concept representation. Words, phrases or important technical terms; proteins, genes, or other advanced features. Our assumption: influence always occurs in the context of concepts.

Influence by word. Consider the citation graph around Latent Dirichlet Allocation: Modern Information Retrieval, Probabilistic Latent Semantic Indexing, Text Classification… Using EM, and Introduction to Variational Methods for Graphical Models. The influence carried by each edge differs with respect to different words, e.g., "bayesian", "simplex", "mixture". The weight on edge (u,v) in G_c represents the strength of direct influence from u to v with respect to concept c: what is the chance that a random instance of concept c in v came about as a result of u?

Influence: Definition. Each edge (u,v) in G_c is active independently with probability equal to its weight. Influence_c(x → y) = the probability that there exists a path of active edges in G_c between x and y. Computing this is #P-complete, but it admits efficient approximation using sampling or dynamic programming.
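A small Monte Carlo sketch of the sampling approximation the slide mentions; the dict-of-dicts graph encoding and function name are assumptions.

```python
import random

def influence_prob(Gc, x, y, n_samples=1000):
    """Estimate Influence_c(x -> y): the probability that a path of
    active edges exists from x to y when each edge (u, v) of the
    concept graph Gc is independently active with probability
    Gc[u][v]. Each sample lazily flips each edge's coin at most once
    while doing a DFS over the active subgraph."""
    hits = 0
    for _ in range(n_samples):
        stack, seen = [x], {x}
        while stack:
            u = stack.pop()
            if u == y:
                hits += 1
                break
            for v, p in Gc.get(u, {}).items():
                if v not in seen and random.random() < p:
                    seen.add(v)
                    stack.append(v)
    return hits / n_samples
```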

Putting it all together. Maximize F_Q(A) subject to |A| ≤ k (output k papers), where Q is the query set and A the selected papers, and F_Q weighs, for each query paper q and each concept c in the concept set, the prevalence of c in q by the probability of influence between q and at least one paper in A. This is a submodular optimization problem, solved efficiently and near-optimally.
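Written out, the slide's description suggests an objective of the following form (a reconstruction; w_{q,c} denotes the prevalence of concept c in query paper q):

```latex
F_Q(A) \;=\; \sum_{q \in Q} \sum_{c} w_{q,c}
  \Bigl(1 - \prod_{a \in A}\bigl(1 - \mathrm{Influence}_c(a \leftrightarrow q)\bigr)\Bigr)
```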

But should all users get the same results?

Personalized trust. Different communities trust different researchers for a given concept: for "network", think Kleinberg, Hinton, or Pearl, depending on your field. Goal: estimate personalized trust from limited user input.

Specifying trust preferences. Specifying trust should not be an onerous task. Assume we are given a (nonexhaustive!) set of trusted papers B, e.g.: a BibTeX file of all the researcher's previous citations; a short list of favorite conferences and journals; or someone else's citation history (a committee member? journal editor? someone in another field? a Turing Award winner?).

Computing trust. How much do I trust Jon Kleinberg with respect to the concept "network"? An author is trusted if he/she influences the user's trusted set B, e.g., if Kleinberg's papers influence papers in B.
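In the influence notation above, one plausible reading of the slide (an assumption on our part, not necessarily the paper's exact definition) is:

```latex
\mathrm{trust}(a, c) \;\propto\;
  \Pr\bigl[\exists\, p \in \mathrm{papers}(a),\; b \in B :\;
           p \text{ influences } b \text{ w.r.t. concept } c\bigr]
```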

Personalized Objective. Add extra weight inside the influence term ("probability of influence between q and at least one paper in A"): does the user trust at least one of the authors of document d with respect to concept c?

Personalized Recommendations. [Figure: general recommendations vs. personalized recommendations for the same query set.]

User Study Evaluation. 16 PhD students in machine learning. For each, we selected a recent publication of the participant — the study paper — for which we find related work. Two variants of our methodology (with and without trust) against three state-of-the-art alternatives: Relational Topic Model (generative model of text and links) [Chang, Blei '10]; Information Genealogy (uses only document text) [Shaparenko, Joachims '07]; and Google Scholar (based on keywords provided by a coauthor). Double-blind study: the participant was shown the title/author/abstract of one paper at a time and asked several questions.

Usefulness (higher is better). On average, our approach provides more useful and more must-read papers than the comparison techniques.

Trust (higher is better). On average, our approach provides more trustworthy papers than the comparison techniques, especially when incorporating the participant's trust preferences.

Novelty. On average, our approach provides more familiar papers than the comparison techniques, especially when incorporating the participant's trust preferences.

Diversity. In pairwise comparisons, our approaches produce more diverse results than the comparison techniques.

Our Approach. Input: phrase complex information needs. Output: structured, annotated output. Interaction: richer forms of interaction. What about structured outputs?

Connecting the Dots: News Domain

Input: pick two articles (start, goal), e.g., "Housing Bubble" and "Bailout". Output: bridge the gap with a smooth chain of articles, e.g.: Keeping Borrowers Afloat → A Mortgage Crisis Begins to Spiral → Investors Grow Wary of Bank's Reliance on Debt → Markets Can't Wait for Congress to Act → Bailout Plan Wins Approval.

Connecting the Dots [Shahaf, G. ‘10]

What is a Good Chain? What's wrong with shortest-path? Build a graph: a node for every article, edges based on similarity, chronological order (a DAG); run BFS from the start article s to the goal t.

Shortest-path chain from "Talks Over Ex-Intern's Testimony On Clinton Appear to Bog Down" (Lewinsky) to "Contesting the Vote: The Overview; Gore Asks Public For Patience" (Florida recount): A1: Talks Over Ex-Intern's Testimony On Clinton Appear to Bog Down; A2: Judge Sides with the Government in Microsoft Antitrust Trial; A3: Who Will Be the Next Microsoft? (trading at a market capitalization…); A4: Palestinians Planning to Offer Bonds on European Markets; A5: Clinton Watches as Palestinians Vote to Rescind 1964 Provision; A6: Contesting the Vote: The Overview; Gore Asks Public For Patience. Stream of consciousness: each transition is strong, but there is no global theme.

A more coherent chain between the same endpoints: B1: Talks Over Ex-Intern's Testimony On Clinton Appear to Bog Down; B2: Clinton Admits Lewinsky Liaison to Jury; B3: G.O.P. Vote Counter in House Predicts Impeachment of Clinton; B4: Clinton Impeached; He Faces a Senate Trial; B5: Clinton's Acquittal; Senators Talk About Their Votes; B6: Aides Say Clinton Is Angered As Gore Tries to Break Away; B7: As Election Draws Near, the Race Turns Mean; B8: Contesting the Vote: The Overview; Gore Asks Public For Patience. What makes it coherent?

Word patterns for the shortest-path chain: the topic changes on every transition (jittery).

Word patterns for the coherent chain: the topic is consistent over transitions. Use this intuition to estimate the coherence of chains.

Strong transitions between consecutive documents d1…d4: score each transition by the words it carries (w1: Lewinsky, w2: Clinton, w3: Oath, w4: Intern, w5: Microsoft), then score the chain by its weakest transition, e.g., min(4,3,1)=1.

But counting shared words is too coarse: it ignores how important a word is to the transition, and misses related-but-absent words. Intuitively, the per-word score should be high iff d_i and d_{i+1} are very related and w plays an important role in that relationship.
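Combining the two slides, the chain score being built up has the weakest-link form (a reconstruction from the min(4,3,1) example):

```latex
\mathrm{Coherence}(d_1,\dots,d_n) \;=\;
  \min_{i=1,\dots,n-1} \;\sum_{w} \mathrm{Influence}(d_i, d_{i+1} \mid w)
```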

Influence with no links! Most influence-measurement methods assume edges (citations, hyperlinks) through which influence propagates, but there are no edges in our news dataset. New method: measure the influence of d_i on d_{i+1} with respect to word w without links, using a random-walk idea.

Global theme, no jitter. A jittery chain can still score well, but only by using a lot of words. Good chains can often be represented by a small number of word segments.

So choose a small number of segments (e.g., 3) on which the chain is scored: the coherent chain still gets a good score, while the jittery chain's score drops to 0.

Coherence: New Objective. Maximize over legal activations, where a legal activation: limits the total number of active words; limits the number of words per transition; and allows each word to be activated at most once.
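In symbols, with a_{i,w} ∈ {0,1} indicating whether word w is active on transition i (a hedged reconstruction of the slide's objective):

```latex
\max_{a}\;\; \min_{i}\; \sum_{w}
  \mathrm{Influence}(d_i, d_{i+1} \mid w)\, a_{i,w}
```

subject to the activation constraints listed above.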

Finding a good chain. The problem is NP-complete, and we can't brute-force it: there are n^d possible chains, still >>10^20 after pruning. Instead, use a softer notion of activation in [0,1]; this yields a natural formalization as a linear program (LP) over next(s, ·) transition variables.

Rounding. The LP doesn't output a chain but edge weights in [0,1], so we need to round. Approximation guarantees: chain length K in expectation; objective within O(sqrt(ln(n/δ))) with probability 1−δ.

Evaluation: Competitors. Shortest path. Google News Timeline: enter a query, pick k equally-spaced articles. Event threading (TDT) [Nallapati et al '04]: generate a cluster graph, take representative articles from clusters.

Example Chain (1), Google News Timeline, from "Simpson trial" to "Simpson verdict": Simpson Strategy: There Were Several Killers → O.J. Simpson's book deal controversy → CNN OJ Simpson Trial News: April Transcripts → Tandoori murder case a rival for OJ Simpson case.

Example Chain (2), Connect-the-Dots, from "Simpson trial" to "Simpson verdict": Issue of Racism Erupts in Simpson Trial → Ex-Detective's Tapes Fan Racial Tensions in LA → Many Black Officers Say Bias Is Rampant in LA Police Force → With Tale of Racism and Error, Lawyers Seek Acquittal.

Evaluation #1: Familiarity. 18 users, 5 news stories. Show two articles, before and after reading the chain: do you know a coherent story linking these articles?

Effectiveness (improvement in familiarity), measured as the average fraction of the familiarity gap closed relative to base familiarity (higher is better): significant improvement in 4/5 stories.

Evaluation #2: Chain Quality. Compare two chains for coherence, relevance, and redundancy.

Relevance, coherence, non-redundancy across complex stories (higher is better). [Figure: bar charts comparing the methods on coherence, relevance, and non-redundancy.]

What's left? Interaction: so far, the user supplies two documents and receives a chain.

Interaction Types: 1. Refinement; 2. User interests.

Interaction. [Figure: a Simpson-trial chain (Defense cross-examines state DNA expert; With fiber evidence, prosecution…; Simpson Defense Drops DNA Challenge; Simpson Verdict; A day the country stood still; In the joy of victory, defense team in discord; Many black officers say bias is rampant in LA police force; Racial split at the end) annotated with concepts such as verdict, race, and blood/glove.] Algorithmic ideas from online learning.

Interaction User Study. Refinement: 72% prefer chains refined by our method over a baseline approach. User interests: 63.3% were able to reverse-engineer the selected concepts (given 10 words, identify the 2 that were used to personalize the chain).

Where We Are Going… Input: phrase complex information needs. Output: structured, annotated output. Interaction: richer forms of interaction. Can we provide more structure than chains?

An example of a structured view: from a query, select important documents, with selected, structured, annotated relations between them.

An example of a structured view: the claim "machines can't have emotions" is supported by "the concept of feeling only applies to living organisms" [Ziff '59] and is disputed by "we can imagine artifacts that have feelings" [Smart '59]. Deeper understanding → address information overload. Challenge: build the structured view automatically!

Trains of Thought. Given a set of documents, show the important pieces of information… and how they relate (e.g., a map over austerity, bailout, junk status, protests, strike, Germany, labor unions, Merkel).

What makes a good map? 1. Coherence; 2. Coverage; 3. Connectivity.

Approach overview. From documents D: 1. build a coherence graph G that encodes all coherent chains as graph paths; 2. define a coverage function f over sets of documents; 3. find high-coverage, high-connectivity paths, i.e., a set of paths that maximizes coverage.
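One way to read the three criteria as a single optimization problem (a hedged reconstruction, not necessarily the paper's exact formulation): over maps M whose paths are all coherent chains in G,

```latex
\max_{\mathcal{M}} \ \mathrm{Connectivity}(\mathcal{M})
\quad \text{s.t.} \quad
\mathrm{Cover}(\mathcal{M}) \;\ge\; (1-\varepsilon)\,\max_{\mathcal{M}'} \mathrm{Cover}(\mathcal{M}')
```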

User Study. Three stories: Chile miners, Haiti earthquake, Greek debt crisis.

Task: Effectiveness. A 10-question form, answered once at the beginning… and again while interacting with Maps / Google News / Maps without graph structure. How fast do people acquire knowledge?

Effectiveness: Results. Rate of closing the knowledge gap (% of the gap closed vs. minutes): structureless maps > Google News, due to content selection; structured maps > structureless maps, especially for complex stories.

Taming Information Overload. Input: phrase complex information needs (a set of query papers, the endpoints of a chain). Output: structured, annotated output (smooth chains connecting the dots, metro maps, issue maps). Interaction: richer forms of interaction (like/dislike via online learning, a BibTeX trust file, feedback on concepts). Efficient algorithms with theoretical guarantees; multiple user studies → a promising direction for taming the challenge of information overload.