
1 Chapter 10 Social Search

2 "Social search describes search acts that make use of social interactions with others. These interactions may be explicit or implicit, co-located or remote, synchronous or asynchronous" [Evans 08]
- Social search
  - Search within a social environment
  - Communities of users actively participating in the search process
  - Goes beyond classical search tasks
  - Facilitates the "information seeking" process
[Evans 08] Evans et al. Towards a Model of Understanding Social Search. In Proc. of Conf. on Computer Supported Cooperative Work. 2008.

3 Social vs. Standard Search
- Key differences
  - Users interact with the system
  - Users interact with one another in an open/social environment, implicitly or explicitly, such as:
    - Visiting social media sites, e.g., YouTube
    - Browsing social networking sites, e.g., Facebook

4 Social Search Activities
- Social search activities, as defined in [Evans 08]
  - Stage where user motives and information needs are defined
  - Stage where user search requirements are refined

5 Social Search Activities
- Users perform a series of actions to identify content from a particular location
- Users locate a source where they can perform a transaction or web-mediated activity
- Exploratory process: users search for information within a specific patch and then extract information from source files
- Users identify preliminary "evidence files" from which they further modify their search schema and query

6 Social Search Activities
- Schematize: process where raw evidence is organized/represented in some schematic way
- Distribute: the end product is shared with others, either face-to-face, by printing out documents, or by bookmarking websites for re-access in the future

7 Web 2.0
- Social search includes, but is not limited to, the so-called social media sites
  - Collectively referred to as "Web 2.0", as opposed to the classical notion of the Web ("Web 1.0")
- Social media sites
  - User-generated content
  - Users can tag their own and others' content
  - Users can share favorites, tags, etc., with others
  - Provide unique data resources for search engines
- Example: YouTube, MySpace, Facebook, LinkedIn, Digg, Twitter, Flickr, Del.icio.us, and CiteULike

8 Social Search Topics
- Online user-interactive data provide a new and interesting search experience
  - User tags: users assign tags to data items, a manual indexing approach
  - Searching within communities: virtual groups of online users who share common interests and interact socially, e.g., blogs and QA systems
  - Recommender systems: individual users are represented by their profiles (fixed queries capturing long-term information needs), e.g., CNN Alert Service, Amazon.com
  - Peer-to-peer networks: querying a community of "nodes" (individuals, organizations, or search engines) for an information need
  - Metasearch: a special case of P2P search in which all the nodes are search engines

9 User Tags and Manual Indexing
- Then: library card catalogs
  - Indexing terms chosen with search in mind
  - Experts generate indexing terms manually
  - Terms are very high quality, e.g., based on the Library of Congress Subject Headings standardized by the US Library of Congress
  - Terms chosen from a controlled/fixed vocabulary and subject guides (a drawback)
- Now: social media tagging
  - Social media sites allow users to generate their own tags manually (+)
  - Tags are not always chosen with search in mind (-)
  - Tags can be noisy or even incorrect, with no quality control (-)
  - Tags chosen from folksonomies, i.e., user-generated taxonomies

10 Social Tagging
- According to [Guan 10]
  - Social tagging services allow users to annotate online resources with freely chosen keywords
  - Tags are collectively contributed by users and represent their comprehension of resources
  - Tags provide meaningful descriptors of resources and implicitly reflect users' interests
  - Tagging services provide keyword-based search, which returns resources annotated with the given tags

11 Social Tagging
- Rating vs. tagging data [Guan 10]
  - Tagging data does not contain users' explicit preference information on resources
  - Tagging data involves three types of objects, i.e., user, tag, and resource, whereas rating data only involves users and resources
[Guan 10] Guan et al. Document Recommendation in Social Tagging Services. In Proc. of Intl. Conf. on World Wide Web. 2010.

12 Types of User Tags
- Content-based: tags describe the content of an item, e.g., car, woman, sky
- Context-based: tags describe the context of an item, e.g., NYC, empire bldg
- Attribute-based: tags describe the attributes of an item, e.g., Nikon (type of camera), black and white (type of movie)
- Subjective: tags subjectively describe an item, e.g., pretty, amazing
- Organizational: tags that organize items, e.g., to do, readme, my pictures

13 Searching Tags
- Searching collaboratively tagged items, i.e., user tags, is challenging
  - Most items have only a few tags, i.e., complex items are sparsely represented; e.g., a query for "aquariums" may not match an item tagged only "goldfish" (the vocabulary mismatch problem)
  - Tags are very short
- Boolean (AND/OR), probabilistic, vector space, and language modeling retrieval will fail if used naïvely

14 Tag Expansion
- Can overcome the vocabulary mismatch problem by expanding the tag representation with external knowledge
- Possible external sources
  - Thesauri
  - Web search results
  - Query logs
- After tags have been expanded, standard retrieval models can be used

15 Tag Expansion Using Search Results
- Example: the tag "tropical fish" is issued as a query to a web search engine; retrieved snippets (e.g., "Age of Aquariums - Tropical Fish", "The Krib (Aquaria and Tropical Fish)", "Keeping Tropical Fish and Goldfish in Aquariums, Fish Bowls, and Ponds at AquariumFish.net") are used as pseudo-relevance feedback to estimate a distribution P(w | "tropical fish") over related terms such as fish, tropical, aquariums, goldfish, bowls
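
A minimal sketch of this kind of expansion, assuming the retrieved snippets are already available as plain strings (snippet retrieval itself is not shown) and using a simple relative-frequency estimate rather than the exact model behind the figure:

import re
from collections import Counter

STOPWORDS = {"the", "and", "a", "of", "in", "at", "this", "with", "for", "to", "about"}

def expand_tag(tag, snippets, num_terms=10):
    """Estimate P(w | tag) from search-result snippets (pseudo-relevance feedback)."""
    counts = Counter()
    for snippet in snippets:
        for word in re.findall(r"[a-z]+", snippet.lower()):
            if word not in STOPWORDS:
                counts[word] += 1
    total = sum(counts.values())
    # Keep the highest-probability terms as the expanded tag representation.
    return {w: c / total for w, c in counts.most_common(num_terms)}

# Hypothetical snippets returned for the query/tag "tropical fish".
snippets = [
    "Huge educational aquarium site for tropical fish hobbyists.",
    "Information about tropical fish aquariums and archived discussions.",
    "Keeping tropical fish and goldfish in aquariums, fish bowls, and ponds.",
]
print(expand_tag("tropical fish", snippets))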

16 Searching Tags
- Even with tag expansion, searching tags is challenging
- Tags are inherently noisy (off topic, inappropriate) and may be incorrect (misspelled, spam)
- Many items may not be tagged at all, making them virtually invisible to any search engine
- It is typically easier to find popular items with many tags than less popular items with few or no tags

17 Inferring Missing Tags
- How can we automatically tag items with few or no tags?
- Inferred tags can be used for
  - Improved tag search
  - Automatic tag suggestion

18 Methods for Inferring Tags
- TF-IDF (based on the textual representation of the item)
  - Suggest tags that have a high TF-IDF weight in the item
  - Only works for textual items
- Classification (determines the appropriateness of a tag)
  - Train a binary classifier for each tag, e.g., using SVM
  - Performs well for popular tags, but not as well for rare tags
- Maximal marginal relevance (MMR): finds tags that are relevant to the item and novel with respect to the tags it already has (see the sketch below)
  - MMR(t; i) = λ · Sim_item(t, i) − (1 − λ) · max_{t_i} Sim_tag(t_i, t)
  - where Sim_item(t, i) is the similarity between tag t and item i (large if t is very relevant to i), Sim_tag(t_i, t) is the similarity between existing tag t_i and candidate tag t (e.g., 1 or 0), and λ is a tunable parameter
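
A minimal sketch of greedy MMR tag suggestion following the formula above; the use of cosine similarity over bag-of-words vectors for both Sim_item and Sim_tag is an assumption (the slide leaves the similarity functions open):

import math
from collections import Counter

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def mmr_tags(item_text, candidate_tags, existing_tags, lam=0.7, k=5):
    """Greedy MMR: pick tags relevant to the item but novel w.r.t. tags already chosen."""
    item_vec = Counter(item_text.lower().split())
    chosen = list(existing_tags)
    suggestions = []
    for _ in range(k):
        best, best_score = None, float("-inf")
        for t in candidate_tags:
            if t in chosen:
                continue
            rel = cosine(Counter(t.split()), item_vec)   # Sim_item(t, i)
            nov = max((cosine(Counter(t.split()), Counter(c.split())) for c in chosen), default=0.0)
            score = lam * rel - (1 - lam) * nov          # MMR(t; i)
            if score > best_score:
                best, best_score = t, score
        if best is None:
            break
        chosen.append(best)
        suggestions.append(best)
    return suggestions

print(mmr_tags("tropical fish swimming in a home aquarium",
               ["fish", "tropical fish", "aquarium", "goldfish", "cat"],
               existing_tags=["fish"]))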

19 Browsing and Tag Clouds
- Search is useful for finding items of interest
- Browsing is more useful for exploring collections of tagged items
- Various ways to visualize collections of tags
  - Tag lists (for a website or a particular group/category of items)
  - Tag clouds (tag popularity shown via display size)
  - Tags ordered alphabetically and/or weighted
  - Tags grouped by category
  - Tags formatted/sorted according to popularity

20 Tag Clouds
- As defined in [Schrammel 09], tag clouds are
  - Visual displays of a set of words (tags) in which attributes of the text, such as size, color, font weight, or intensity, are used to represent relevant properties, e.g., the frequency of documents linked to the tag
  - A good visualization technique for communicating an "overall picture"
[Schrammel 09] Schrammel et al. Semantically Structured Tag Clouds: an Empirical Evaluation of Clustered Presentation Approaches. In Proc. of Intl. Conf. on Human Factors in Computing Systems. 2009.

21 Sample Tag Cloud
animals architecture art australia autumn baby band barcelona beach berlin birthday black blackandwhite blue california cameraphone canada canon car cat chicago china christmas church city clouds color concert day dog england europe family festival film florida flower flowers food france friends fun garden germany girl graffiti green halloween hawaii holiday home house india ireland italy japan july kids lake landscape light live london macro me mexico music nature new newyork night nikon nyc ocean paris park party people portrait red river rock sanfrancisco scotland sea seattle show sky snow spain spring street summer sunset taiwan texas thailand tokyo toronto travel tree trees trip uk usa vacation washington water wedding
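
A minimal sketch of how font sizes for such a cloud might be derived from tag frequencies; the log scaling and the pixel range are illustrative assumptions:

import math

def tag_cloud_sizes(tag_counts, min_px=10, max_px=40):
    """Map tag frequencies to font sizes using log scaling."""
    lo = math.log(min(tag_counts.values()))
    hi = math.log(max(tag_counts.values()))
    span = (hi - lo) or 1.0  # avoid division by zero when all counts are equal
    return {
        tag: round(min_px + (math.log(c) - lo) / span * (max_px - min_px))
        for tag, c in tag_counts.items()
    }

print(tag_cloud_sizes({"london": 950, "wedding": 420, "nikon": 130, "autumn": 25}))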

22 Searching with Communities
- What is an online community?
  - Groups of entities (i.e., users, organizations, websites) that interact in an online environment and share common goals, interests, or traits
  - Besides tagging, community users also post to newsgroups, blogs, and other forums
  - To improve the overall user experience, web search engines should automatically find the communities a user belongs to
- Example: baseball fan community, digital photography community, etc.
- Not all communities are made up of humans!
  - Web communities are collections of web pages that are all about a common topic

23 Online Communities
- According to [Seo 09]
  - Online communities are valuable information sources where knowledge is accumulated by interactions between people
  - Online community pages have many unique textual or structural features, e.g.,
    - A forum has several sub-forums covering high-level topic categories
    - Each sub-forum has many threads
    - A thread is a more focused, topic-centric discussion unit composed of posts created by community members
[Seo 09] Seo et al. Online Community Search Using Thread Structure. In Proc. of ACM Conf. on Information and Knowledge Management. 2009.

24 Online Communities
- A gathering of people in an online space where they can come together, communicate, and get to know each other over time [Chen 08]
- A social aggregation that emerges when enough people carry on public discussions over time, with sufficient human feeling, to form webs of personal relationships in cyberspace [Rheingold 00]
- Online communities [Chen 08]
  - Have open membership
  - A user can reach the other members of the community easily
  - Shared interests and activities are the major reasons users join the community
  - Tagging information can be used to define the interests of the users
[Chen 08] Chen et al. Finding Core Members in Virtual Communities. WWW 2008.
[Rheingold 00] H. Rheingold. The Virtual Community: Homesteading on the Electronic Frontier. MIT Press. 2000.

25 Finding Communities
- How can we design general-purpose algorithms for finding every possible type of online community?
- What are the characteristics of a community?
  - Entities (users) within a community are similar to each other
  - Members of a community are likely to interact more with other members of the community than with those outside of it
- Interactions between a set of entities can be represented as a graph
  - Vertices (V) are entities
  - Edges (E), directed or undirected, denote interactions between entities
    - Undirected edges represent symmetric relationships
    - Directed edges represent non-symmetric or causal relationships

26 HITS
- The Hyperlink-Induced Topic Search (HITS) algorithm can be used to find communities
  - A link analysis algorithm, like PageRank
  - Each entity has a hub score and an authority score
- Based on a circular set of assumptions
  - Good hubs point to good authorities
  - Good authorities are pointed to by good hubs
- Iterative algorithm (a sketch follows below): repeat until convergence
  - A(p) = Σ_{q→p} H(q), the sum of the hub scores of the entities pointing at p
  - H(p) = Σ_{p→q} A(q), the sum of the authority scores of the entities pointed at by p
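
A minimal sketch of the HITS iteration on a directed interaction graph; the L2 normalization after each update and the fixed iteration count are conventional choices, not specified on the slide:

import math

def hits(edges, iterations=50):
    """Compute hub and authority scores for a directed graph given as (source, target) pairs."""
    nodes = {n for e in edges for n in e}
    hub = {n: 1.0 for n in nodes}
    auth = {n: 1.0 for n in nodes}
    for _ in range(iterations):
        # Authority update: sum of hub scores of nodes pointing at p.
        auth = {p: sum(hub[q] for q, r in edges if r == p) for p in nodes}
        # Hub update: sum of authority scores of nodes p points at.
        hub = {p: sum(auth[r] for q, r in edges if q == p) for p in nodes}
        # Normalize so scores do not grow without bound.
        a_norm = math.sqrt(sum(v * v for v in auth.values())) or 1.0
        h_norm = math.sqrt(sum(v * v for v in hub.values())) or 1.0
        auth = {n: v / a_norm for n, v in auth.items()}
        hub = {n: v / h_norm for n, v in hub.items()}
    return hub, auth

edges = [(1, 3), (2, 3), (3, 5), (4, 3), (3, 6), (5, 6)]  # hypothetical interaction graph
hub, auth = hits(edges)
print(sorted(auth.items(), key=lambda kv: -kv[1]))  # most authoritative entities first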

27 HITS
- Forming a community (C)
  - Apply HITS to the entity interaction graph to find communities
  - Identify a subset of the entities (V), called candidate entities, to be members of C (based on common interest)
  - Entities with large authority scores are the "core" or "authoritative" members of C
    - To be a strong authority, an entity must have many incoming edges with relatively moderate hub scores, or very few incoming edges with very large hub scores
  - Vertices not connected to any others have hub and authority scores of 0

28 Finding Communities
- Clustering
  - Community finding is an inherently unsupervised learning problem
  - Agglomerative or K-means clustering approaches can be applied to the entity interaction graph to find communities (see the sketch after the graph representation below)
  - Use a vector representation to capture the connectivity of the various entities
  - Distances between entities (e.g., Euclidean distance over these vectors) drive the clustering
- Evaluating community finding algorithms is hard
- Communities can be used in various ways to improve web search, browsing, expert finding, recommendation, etc.

29 Graph Representation
- Example (figure in the original slides): a 7-node entity interaction graph in which each node is represented by a binary vector encoding which of the other nodes it is connected to; these connectivity vectors are the input to the clustering algorithms above
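
A minimal sketch of K-means clustering over such binary connectivity vectors using Euclidean distance; the small graph below is hypothetical, not the one from the slide:

import random

def kmeans(vectors, k=2, iterations=20, seed=0):
    """Cluster entities represented as connectivity vectors (lists of 0/1)."""
    random.seed(seed)
    centroids = random.sample(vectors, k)
    assignment = [0] * len(vectors)
    for _ in range(iterations):
        # Assign each entity to the nearest centroid (squared Euclidean distance).
        assignment = [
            min(range(k), key=lambda c: sum((x - y) ** 2 for x, y in zip(v, centroids[c])))
            for v in vectors
        ]
        # Recompute centroids as the mean of each cluster's vectors.
        for c in range(k):
            members = [v for v, a in zip(vectors, assignment) if a == c]
            if members:
                centroids[c] = [sum(col) / len(members) for col in zip(*members)]
    return assignment

# Hypothetical 5-entity graph: each row marks which entities this entity interacts with.
vectors = [
    [0, 1, 1, 0, 0],
    [1, 0, 1, 0, 0],
    [1, 1, 0, 0, 0],
    [0, 0, 0, 0, 1],
    [0, 0, 0, 1, 0],
]
print(kmeans(vectors, k=2))  # cluster assignment for each entity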

30 Community-Based Question Answering
- Some complex information needs can't be answered by traditional search engines
  - No single webpage may exist that satisfies the information need
  - Information may come from multiple sources
  - Human (non-)experts in a wide range of topics form community-based question answering (CQA) groups, e.g., Yahoo! Answers
- Community-based question answering tries to overcome these limitations
  - Searchers enter questions
  - Community members answer questions

31 Example Questions (screenshot of example CQA questions in the original slides)

32 Community-Based Question Answering
- Pros
  - Users can find answers to complex or obscure questions, with diverse opinions about a topic
  - Answers come from humans (not algorithms) who share common interests/problems and can be interacted with
  - The archive of previous questions/answers can be searched, e.g., on Yahoo! Answers
- Cons
  - Some questions never get answered
  - It often takes time (possibly days) to get a response
  - Answers may be wrong, spam, or misleading

33 Community-Based Question Answering
- Yahoo! Answers, a community-driven question-and-answer site launched by Yahoo! on July 5, 2005 (screenshot in the original slides)

34 Question Answering Models
- How can we effectively search an archive (database) of question/answer pairs?
- Can be treated as a translation problem
  - Translate a question into a related/similar question that is likely to have relevant answers
  - Translate a question into an answer: less desirable
- The vocabulary mismatch problem
  - Traditional IR models will likely miss many relevant questions
  - There are many different ways to ask the same question
  - Stopword removal and stemming do not help
  - Solution: consider related concepts (i.e., words) via the probability of replacing one word by another

35 Question Answering Models
- Translation-based language model (for finding related questions and answers): translate each w (in Q) from some t (in A)
  - P(Q | A) = Π_{w ∈ Q} Σ_{t ∈ V} P(w | t) · P(t | A)
  - where Q is a question, A is a related question in the archive, V is the vocabulary, P(w | t) is the translation probability, and P(t | A) is the (smoothed) probability of generating t given A
  - Anticipated problem: a good (independent) term-to-term translation might not yield a good overall translation
  - Potential solution: give matches of the original question terms more weight than matches of translated terms (a sketch of the scoring follows below)
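
A minimal sketch of scoring an archived question A against a query Q with the translation-based model above, assuming the translation probabilities P(w | t) are already available and P(t | A) uses simple linear smoothing against a background collection model (the smoothing choice is an assumption):

import math
from collections import Counter

def p_term(t, question_words, collection_probs, lam=0.5):
    """Smoothed P(t | A): mix of maximum-likelihood estimate and collection model."""
    counts = Counter(question_words)
    ml = counts[t] / len(question_words)
    return (1 - lam) * ml + lam * collection_probs.get(t, 1e-6)

def translation_lm_score(query_words, archived_words, trans, collection_probs):
    """log P(Q | A) = sum over query words of log sum_t P(w | t) P(t | A)."""
    score = 0.0
    vocab = set(archived_words) | set(collection_probs)
    for w in query_words:
        p_w = sum(trans.get(w, {}).get(t, 0.0) * p_term(t, archived_words, collection_probs)
                  for t in vocab)
        score += math.log(max(p_w, 1e-12))  # floor to avoid log(0)
    return score

# Hypothetical translation table and background model.
trans = {"aquarium": {"aquarium": 0.7, "tank": 0.2, "fish": 0.1},
         "clean": {"clean": 0.8, "wash": 0.2}}
collection_probs = {"aquarium": 0.01, "tank": 0.01, "fish": 0.02, "clean": 0.01, "wash": 0.005,
                    "how": 0.05, "do": 0.05, "i": 0.05, "my": 0.05}
archived = "how do i clean my fish tank".split()
print(translation_lm_score(["clean", "aquarium"], archived, trans, collection_probs))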

36 Question Answering Models
- Enhanced translation-based model, which extends the translation-based language model when ranking archived questions A for Q:
  - P(Q | A) = Π_{w ∈ Q} P(w | A), with
  - P(w | A) = (|A| / (|A| + λ)) · [ (1 − β) · P_ml(w | A) + β · Σ_{t ∈ V} P(w | t) · P_ml(t | A) ] + (λ / (|A| + λ)) · (C_w / |C|)
  - where β ∈ [0, 1] controls the influence of the translation probability, λ is a smoothing parameter, |A| is the number of words in question A, C_w is the count of w in the entire collection C, and |C| is the total number of word occurrences in C
  - when β → 1, the model becomes more similar to the translation-based language model
  - when β → 0, the translated question is equivalent to the original question

37 Computing Translation Probabilities
- Translation probabilities are learned from a parallel corpus
- Most often used for learning inter-language probabilities
- Can also be used for intra-language probabilities
  - Treat question-answer pairs as a parallel corpus
  - Translation probabilities are estimated from the archived pairs (Q_1, A_1), (Q_2, A_2), ..., (Q_N, A_N)
- Drawbacks
  - Computationally expensive: the sum over the entire vocabulary can be very large
  - Solution: consider only a small number (e.g., 5) of most likely translations per question term
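
A minimal sketch of estimating intra-language translation probabilities from question-answer pairs with a few EM iterations in the style of IBM Model 1; the toy pairs and iteration count are illustrative, and a real system would also prune to the top few translations per term as the slide suggests:

from collections import defaultdict

def train_translation_probs(pairs, iterations=10):
    """Estimate P(q_word | a_word) from (question, answer) word-list pairs via EM (IBM Model 1 style)."""
    q_vocab = {w for q, _ in pairs for w in q}
    a_vocab = {w for _, a in pairs for w in a}
    # Uniform initialization of P(q | a).
    t = {q: {a: 1.0 / len(q_vocab) for a in a_vocab} for q in q_vocab}
    for _ in range(iterations):
        counts = defaultdict(lambda: defaultdict(float))
        totals = defaultdict(float)
        for q_words, a_words in pairs:
            for qw in q_words:
                norm = sum(t[qw][aw] for aw in a_words)
                for aw in a_words:
                    frac = t[qw][aw] / norm  # expected alignment count
                    counts[qw][aw] += frac
                    totals[aw] += frac
        # M-step: renormalize counts into probabilities.
        for qw in counts:
            for aw in counts[qw]:
                t[qw][aw] = counts[qw][aw] / totals[aw]
    return t

pairs = [("clean fish tank".split(), "scrub the aquarium glass weekly".split()),
         ("fish tank cloudy".split(), "cloudy aquarium water usually means bacteria".split())]
t = train_translation_probs(pairs)
print(sorted(t["tank"].items(), key=lambda kv: -kv[1])[:3])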

38 Sample Question/Answer Translations (table of example term translations in the original slides)

39 Collaborative Searching
- Traditional search assumes a single searcher
- Collaborative search involves a group of users with a common goal, searching together in a collaborative setting
- Example scenarios
  - Students doing research for a history report
  - Family members searching for information on how to care for an aging relative
  - Team members working to gather information and requirements for an industrial project

40 Collaborative Search
- Two types of collaborative search settings, depending on where participants are physically located
- Co-located
  - Participants are in the same location
  - CoSearch system (Amershi & Morris, 2008)
- Remote
  - Participants are in different locations
  - SearchTogether system (Morris & Horvitz, 2007)

41 Collaborative Search Scenarios
- Figures in the original slides: co-located collaborative searching (example: CoSearch) and remote collaborative searching (example: SearchTogether)

42 Collaborative Search
- Involves a group of users who share a common goal, searching together in a collaborative setting
  - Members contribute, gather, and develop a better understanding of the collected information
- Challenges
  - How do users interact with the system?
  - How do users interact with each other?
  - How is data shared?
  - What data persists across sessions?
- Very few commercial collaborative search systems exist
- We are likely to see more systems of this type in the future

43 Document Filtering
- Ad hoc retrieval
  - Document collections and information needs change with time
  - (Static) results are returned when a query is entered
- Document filtering
  - Document collections change with time, but information needs are static (long-term)
  - Long-term information needs are represented as a profile
  - Documents entering the system that match the profile are delivered to the user via a push mechanism
  - Must be efficient and effective (minimize false positives and false negatives)

44 Profiles
- Represent long-term information needs and personalize the search experience
- Can be represented in different ways (see the sketch below)
  - Boolean or keyword query (soft filter)
  - Sets of relevant and non-relevant documents (soft filter)
  - Social tags and named entities (soft filter)
  - Relational constraints (hard filters), e.g., "Published before 1990", "Price in the $10 - $25 range"
- The actual representation usually depends on the underlying filtering model
- Profiles may be static (static filtering) or updated over time (adaptive filtering)
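
A minimal sketch of how such a profile might be represented as a data structure, separating soft evidence from hard relational constraints; the field names are illustrative assumptions:

from dataclasses import dataclass, field
from typing import Callable

@dataclass
class Profile:
    """Long-term information need: soft evidence plus hard relational constraints."""
    keywords: list[str] = field(default_factory=list)          # soft: query terms
    relevant_docs: list[str] = field(default_factory=list)     # soft: example documents
    tags: list[str] = field(default_factory=list)              # soft: social tags
    constraints: list[Callable[[dict], bool]] = field(default_factory=list)  # hard filters

    def passes_hard_filters(self, doc: dict) -> bool:
        return all(check(doc) for check in self.constraints)

profile = Profile(
    keywords=["tropical", "fish", "aquarium"],
    tags=["aquariums", "goldfish"],
    constraints=[lambda d: d.get("year", 0) < 1990],  # "Published before 1990"
)
print(profile.passes_hard_filters({"year": 1985, "text": "tropical fish care"}))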

45 Document Filtering Scenarios
- Figure in the original slides: a document stream (documents arriving at t = 2, 3, 5, 8) matched against profiles
  - Static filtering: Profiles 1-3 stay fixed over time; easier to process, less robust
  - Adaptive filtering: profiles are updated over time (e.g., Profile 1 becomes Profile 1.1); more robust, but requires frequent updates

46 Static Filtering
- Given a fixed profile, how can we determine whether an incoming document should be delivered?
- Treat it as an IR problem
  - Boolean
  - Vector space
  - Language modeling
- Treat it as a supervised learning problem
  - Naïve Bayes
  - Support vector machines
- A predefined threshold on the score is required to turn scores into delivery decisions

47 Static Filtering with Language Models
- Assume the profile consists of K relevant documents T_i, each with weight α_i
- The probability of a word w given the profile P is a weighted mixture of the smoothed document language models:
  - P(w | P) = Σ_{i=1}^{K} α_i · P_λ(w | T_i)
  - where α_i is the weight (importance) associated with T_i, f_{w,T_i} is the frequency of occurrence of word w in T_i, λ is a smoothing parameter, C_w is the count of w in the entire collection C, and |C| is the total number of word occurrences in C
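
A minimal sketch of this profile model, assuming Jelinek-Mercer (linear) smoothing of each document model against the collection model; the exact smoothing form on the original slide may differ:

from collections import Counter

def profile_word_prob(w, profile_docs, weights, collection_counts, lam=0.2):
    """P(w | P) = sum_i alpha_i * [(1 - lam) * f_{w,Ti}/|Ti| + lam * C_w/|C|]."""
    c_total = sum(collection_counts.values())
    p_coll = collection_counts.get(w, 0) / c_total
    prob = 0.0
    for doc, alpha in zip(profile_docs, weights):
        counts = Counter(doc)
        p_ml = counts[w] / len(doc)
        prob += alpha * ((1 - lam) * p_ml + lam * p_coll)
    return prob

# Hypothetical profile of two relevant documents with weights summing to 1.
docs = ["tropical fish care and aquarium setup".split(),
        "goldfish bowl cleaning tips".split()]
collection_counts = Counter("tropical fish aquarium goldfish news sports politics weather".split())
print(profile_word_prob("aquarium", docs, weights=[0.6, 0.4], collection_counts=collection_counts))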

48 Adaptive Filtering
- In adaptive filtering, profiles are dynamic
- How can profiles change?
  - The user can explicitly update the profile
  - The user can provide (relevance) feedback about the documents delivered for the profile
  - Implicit user behavior can be captured and used to update the profile

49 Adaptive Filtering Models
- Rocchio: profiles treated as vectors (see the sketch below)
- Relevance-based language models: profiles treated as language models
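
A minimal sketch of a Rocchio-style profile update over term-weight vectors; the parameter values (alpha = 1, beta = 0.75, gamma = 0.15) are conventional defaults, not taken from the slide:

from collections import Counter, defaultdict

def rocchio_update(profile, relevant_docs, nonrelevant_docs, alpha=1.0, beta=0.75, gamma=0.15):
    """New profile = alpha*profile + beta*centroid(relevant) - gamma*centroid(nonrelevant)."""
    updated = defaultdict(float)
    for term, w in profile.items():
        updated[term] += alpha * w
    for docs, coeff in ((relevant_docs, beta), (nonrelevant_docs, -gamma)):
        if not docs:
            continue
        centroid = Counter()
        for doc in docs:
            centroid.update(doc)
        for term, count in centroid.items():
            updated[term] += coeff * count / len(docs)
    # Negative weights are usually clipped to zero.
    return {t: w for t, w in updated.items() if w > 0}

profile = {"fish": 1.0, "aquarium": 0.8}
relevant = ["tropical fish tank maintenance".split()]
nonrelevant = ["fishing boat charter".split()]
print(rocchio_update(profile, relevant, nonrelevant))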

50 Fast Filtering with Millions of Profiles
- Real filtering systems
  - May have thousands or even millions of profiles
  - Many new documents enter the system daily
- How can we filter efficiently in such a system?
  - Most profiles are represented as text or a set of features
  - Build an inverted index over the profiles
  - Treat each incoming document as a "query" and run it against the index (see the sketch below)
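
A minimal sketch of indexing profiles and running an incoming document against them as a query; scoring by weighted term overlap with a fixed threshold is an assumption, not the specific system the slide describes:

from collections import defaultdict

def build_profile_index(profiles):
    """Inverted index: term -> list of (profile_id, weight)."""
    index = defaultdict(list)
    for pid, terms in profiles.items():
        for term, weight in terms.items():
            index[term].append((pid, weight))
    return index

def filter_document(doc_terms, index, threshold=1.0):
    """Score every profile that shares a term with the document; deliver above threshold."""
    scores = defaultdict(float)
    for term in doc_terms:
        for pid, weight in index.get(term, ()):
            scores[pid] += weight
    return [pid for pid, s in scores.items() if s >= threshold]

profiles = {"p1": {"fish": 0.9, "aquarium": 0.7}, "p2": {"football": 1.0}}
index = build_profile_index(profiles)
print(filter_document("how to clean an aquarium for tropical fish".split(), index))  # ['p1']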

51 Evaluation of Filtering Systems
- The definition of "good" depends on the purpose of the underlying filtering system
  - Filtering systems do not produce a ranking of documents for each profile
  - Ranking-based evaluation measures, such as Precision@n and MAP, are therefore irrelevant; precision, recall, and F-measure are computable
- Generic filtering evaluation measure: a linear utility over the decision outcomes, U = α·TP + β·FN + γ·FP + δ·TN
  - The setting α = 2, β = 0, γ = -1, δ = 0 is widely used, i.e., U = 2·TP - FP

52 Summary of Filtering Models (comparison table in the original slides)

53 Collaborative Filtering
- Static and adaptive filtering are not social tasks; profiles (and their users) are assumed to be independent of each other
- Similar users are likely to have similar preferences or profiles
- Collaborative filtering exploits relationships between profiles (users) to improve how items (documents) are matched to users (profiles)
  - If user A is similar to user B and A judged a document D relevant, then D is likely to be relevant to B as well
  - Often used as a component of recommender systems

54 Collaborative Filtering
- According to [Ma 09], there are two widely used types of methods for collaborative filtering
  - Neighborhood-based methods
    - Include user-based approaches, which predict the ratings of active users based on information computed from items similar to those chosen by the active user
    - Suffer from data sparsity and scalability problems
  - Model-based methods, which use the observed user-item ratings to train a compact model that explains the given data so that ratings can be predicted
[Ma 09] Ma et al. Semi-nonnegative Matrix Factorization with Global Statistical Consistency for Collaborative Filtering. In Proc. of ACM Conf. on Information and Knowledge Management. 2009.

55 Collaborative Filtering
- Example [Ma 09]: collaborative filtering predicts the missing values in the user-item matrix (the figure in the original slides shows a user-item matrix and the predicted, filled-in user-item matrix); a sketch of a neighborhood-based prediction follows below
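
A minimal sketch of user-based neighborhood prediction of a missing rating, using cosine similarity between users' rating vectors; the toy matrix and the similarity/weighting choices are illustrative assumptions:

import math

def cosine_sim(r1, r2):
    common = set(r1) & set(r2)
    if not common:
        return 0.0
    dot = sum(r1[i] * r2[i] for i in common)
    n1 = math.sqrt(sum(v * v for v in r1.values()))
    n2 = math.sqrt(sum(v * v for v in r2.values()))
    return dot / (n1 * n2)

def predict_rating(ratings, user, item, k=2):
    """Predict ratings[user][item] from the k most similar users who rated the item."""
    neighbors = [(cosine_sim(ratings[user], ratings[u]), ratings[u][item])
                 for u in ratings if u != user and item in ratings[u]]
    neighbors.sort(reverse=True)
    top = neighbors[:k]
    denom = sum(sim for sim, _ in top)
    return sum(sim * r for sim, r in top) / denom if denom else None

# Hypothetical user-item rating matrix (missing entries simply absent).
ratings = {
    "alice": {"item1": 5, "item2": 3},
    "bob":   {"item1": 4, "item2": 3, "item3": 4},
    "carol": {"item1": 1, "item3": 2},
}
print(predict_rating(ratings, "alice", "item3"))  # predicted rating for the missing entry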

56 Recommender Systems
- Recommender systems use collaborative filtering algorithms to recommend items that a user may be interested in, e.g., Amazon.com, Netflix
- Unlike static/adaptive filtering algorithms, recommender systems provide ratings for items (e.g., in the range 0..1)
- Recommender systems suggest items (e.g., movies, news) that are likely to be of interest to users, whereas collaborative filtering refers to the technique of predicting users' preferences based on taste information from other users [Ma 09]

57 Recommender Systems
- Figure in the original slides: users with similar profiles are close to each other; some users have a known preference for an item I, while for others the preference on I is unknown

58 Recommender System Algorithms
- Input
  - (user, item, rating) tuples for items that the user has explicitly rated
  - Typically represented as a user-item matrix
- Output
  - (user, item, predicted rating) tuples for items that the user has not rated
  - Can be thought of as filling in the missing entries of the user-item matrix
- Most algorithms infer missing ratings based on the ratings of similar users

