
1 Seminar on Sub-linear Algorithms, Prof. Ronitt Rubinfeld

2 What are we here for?

3 Big data?

4 Really Big Data
- Impossible to access all of it
- Potentially accessible data is too enormous to be viewed by a single individual
- Once accessed, data can change

5 Approaches
- Ignore the problem
- Develop algorithms for dealing with such data

6 Small world phenomenon
- Each "node" is a person
- An "edge" connects two people who know each other

7 Small world property
- The graph is "connected" if every pair of people can reach each other
- The "distance" between two people is the minimum number of edges needed to reach one from the other
- The "diameter" is the maximum distance over all pairs

8 The "6 degrees of separation" property: in our language, the diameter of the world population graph is 6

9 Does Earth have the small world property? How can we know?
- The data collection problem is immense
- There are unknown groups of people on Earth
- Births and deaths keep changing the graph

10 The Gold Standard
- Linear time algorithms
- Inadequate here…

11 What can we hope to do without viewing most of the data?
Can't answer "for all" or "there exists" and other "exactly"-type statements:
- Are all individuals connected by at most 6 degrees of separation?
- Exactly how many individuals on Earth are left-handed?
Maybe we can answer:
- Is there a large group of individuals connected by at most 6 degrees of separation?
- Is the average pairwise distance of the graph roughly 6?
- Approximately how many individuals on Earth are left-handed?

12 What can we hope to do without viewing most of the data?
Must change our goals:
- For most interesting problems, the algorithm must give an approximate answer
- We know we can answer some questions, e.g., sampling to approximate average or median values
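For instance, the "approximately how many individuals are left-handed" question from the previous slide can be answered from a sample whose size depends only on the desired accuracy, not on the population size: by Hoeffding's inequality, about ln(2/δ)/(2ε²) samples estimate a fraction to within ±ε with probability 1-δ. A minimal Python sketch (the function names and the random-individual sampler are illustrative stand-ins, not from the slides):

```python
import math
import random

def estimate_fraction(sample_one, eps=0.05, delta=0.01):
    """Estimate a population fraction to within +-eps with probability >= 1-delta.
    By Hoeffding's inequality the sample size depends only on eps and delta,
    not on the population size. `sample_one` is a hypothetical one-step access
    to a uniformly random individual (returns True/False)."""
    m = math.ceil(math.log(2 / delta) / (2 * eps ** 2))
    return sum(sample_one() for _ in range(m)) / m

# Stand-in data access: each call reports whether a random individual is left-handed.
true_fraction = 0.11
sample_one = lambda: random.random() < true_fraction
print(estimate_fraction(sample_one))   # close to 0.11 with high probability
```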

13 Sublinear time models
- Random access queries: can access any word of the input in one step
- Samples: can get one sample x_1, x_2, x_3, … from a distribution p in one step; alternatively, can only get a random word of the input in one step
  - used when computing functions that depend on the frequencies of data elements, or when the data arrives in random order

14 Sublinear space model: streaming
- Data flows by, and once it goes by you can't see it again
- You do have a small amount of space to work with and remember some things…

15 Part I: Query model

16 A. Standard approximation problems
- Estimate the value of an optimization problem after viewing only a small portion of the data
- Classically studied problems: minimum spanning tree, vertex cover, max cut, positive linear programs, edit distance, …
- Some problems are "hard", some are "easy"…

17 Example: Vertex Cover
- What is the smallest set of nodes that "sees" every edge?
- What's known?
  - NP-complete (unlikely to have an efficient exact solution)
  - Can approximate to within a multiplicative factor of 2 in linear time
  - Can approximate to within a multiplicative factor of 2, plus a small additional additive error, in CONSTANT TIME on sparse graphs! [Nguyen Onak 2008] [Onak Ron Rosen Rubinfeld 2012]
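The linear-time factor-2 approximation mentioned above is classically obtained by taking both endpoints of every edge in a greedily constructed maximal matching; the constant-time results of [Nguyen Onak 2008] and [Onak Ron Rosen Rubinfeld 2012] simulate this kind of local rule on sparse graphs. A minimal sketch of the classical linear-time version only, with illustrative names:

```python
def vertex_cover_2approx(edges):
    """Greedy maximal matching: whenever an edge has both endpoints uncovered,
    add both endpoints. The matched edges are disjoint and any optimal cover
    must contain at least one endpoint of each, so |cover| <= 2 * OPT."""
    cover = set()
    for u, v in edges:
        if u not in cover and v not in cover:
            cover.add(u)
            cover.add(v)
    return cover

# Path 0-1-2-3: the optimum is {1, 2}; the sketch returns a cover of size at most 4.
print(vertex_cover_2approx([(0, 1), (1, 2), (2, 3)]))
```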

18 B. Property testing

19 Quickly distinguish inputs that have a specific property from those that are far from having the property
Benefits:
- a natural question
- just as good when the data is constantly changing
- a fast sanity check: rule out "bad" inputs (e.g., restaurant bills); decide when expensive processing is worthwhile
- machine learning: the model selection problem
[Diagram: nested sets of all inputs, inputs close to having the property, and inputs with the property]

20 Property testing algorithms: does the input object have crucial properties? (clusterability, small-diameter graph, close to a codeword, in alphabetical order, …)
Examples:
- Can test whether a social network has 6 degrees of separation in CONSTANT TIME! [Parnas Ron]
- Can test whether a large dataset is sorted in O(log n) time [Ergun Kannan Kumar Rubinfeld Viswanathan]
- Can test whether data is well-clusterable in CONSTANT time [Parnas Ron Rubinfeld]

21 Property Testing
- Properties of any object, e.g., functions, graphs, strings, matrices, codewords
- The model must specify:
  - the representation of the object and the allowable queries
  - the notion of close/far, e.g., the number of bits/words that need to be changed, or edit distance

22 A simple property tester

23 Sortedness of a sequence
- Given: a list y_1, y_2, …, y_n
- Question: is the list sorted?
- Clearly requires n steps: must look at each y_i

24 Sortedness of a sequence
- Given: a list y_1, y_2, …, y_n
- Question: can we quickly test whether the list is close to sorted?

25 What do we mean by "quick"?
- Query complexity, measured in terms of the list size n
- Our goal (if possible): very small compared to n; we will aim for c·log n

26 What do we mean by "close"?
Definition: a list of size n is ε-close to sorted if one can delete at most εn values to make it sorted. Otherwise it is ε-far. (ε is given as input, e.g., ε = 1/10)
- Sorted: 1 2 4 5 7 11 14 19 20 21 23 38 39 45
- Close: 1 4 2 5 7 11 14 19 20 39 23 21 38 45 (deleting 2, 39, 21 leaves the sorted sublist 1 4 5 7 11 14 19 20 23 38 45)
- Far: 45 39 23 1 38 4 5 21 20 19 2 7 11 14 (the longest sorted sublist is only 1 4 5 7 11 14)
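To make the definition concrete: a list is ε-close to sorted exactly when its longest non-decreasing subsequence has length at least (1-ε)n, since the deleted elements are the ones outside that subsequence. A hedged O(n log n) checker (the function name and the ε used in the example are mine, chosen to match the slide's three-out-of-fourteen deletions):

```python
import bisect

def is_eps_close_to_sorted(ys, eps):
    """Exact O(n log n) check: ys is eps-close to sorted iff deleting at most
    eps*n elements leaves a non-decreasing list, i.e. its longest non-decreasing
    subsequence has length >= (1 - eps) * n."""
    tails = []  # tails[k] = smallest possible last value of a subsequence of length k+1
    for y in ys:
        i = bisect.bisect_right(tails, y)   # bisect_right keeps equal values, so "non-decreasing"
        if i == len(tails):
            tails.append(y)
        else:
            tails[i] = y
    return len(tails) >= (1 - eps) * len(ys)

# The slide's examples, with eps = 3/14 (three deletions allowed out of 14 values):
print(is_eps_close_to_sorted([1, 4, 2, 5, 7, 11, 14, 19, 20, 39, 23, 21, 38, 45], 3 / 14))  # True
print(is_eps_close_to_sorted([45, 39, 23, 1, 38, 4, 5, 21, 20, 19, 2, 7, 11, 14], 3 / 14))  # False
```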

27 Requirements for the algorithm
- Pass sorted lists
- Fail lists that are ε-far; equivalently, if a list is likely to pass the test, then changing at most an ε fraction of it makes it sorted
- Probability of success > 3/4 (can be boosted arbitrarily high by repeating several times and outputting "fail" if we ever see a "fail", and "pass" otherwise)
- Can test in O((1/ε) log n) time (and one can't do any better!)
- What if the list is not sorted, but also not far? Then the tester may answer either way.
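The boosting step is one-sided: repeat the basic tester k times and output "fail" if any repetition fails. Sorted lists still always pass, while a far list slips through with probability at most (1/4)^k. A minimal generic wrapper (the names and the toy tester are illustrative):

```python
import random

def boost(test, reps=5):
    """One-sided amplification: run the basic tester `reps` times and output
    "FAIL" if any run fails. A sorted list still always passes; a far list,
    which each run rejects with probability >= 3/4, now slips through with
    probability at most (1/4) ** reps."""
    def boosted(*args, **kwargs):
        for _ in range(reps):
            if test(*args, **kwargs) == "FAIL":
                return "FAIL"
        return "PASS"
    return boosted

# Toy usage with a placeholder tester that should say FAIL but errs with probability 1/4:
flaky = lambda: "PASS" if random.random() < 0.25 else "FAIL"
print(boost(flaky)())   # "FAIL" with probability >= 1 - (1/4)**5
```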

28 An attempt
- Proposed algorithm: pick a random i and test that y_i ≤ y_{i+1}
- Bad input type: 1, 2, 3, 4, 5, …, n/4, 1, 2, …, n/4, 1, 2, …, n/4, 1, 2, …, n/4
- It is difficult for this algorithm to find a "breakpoint", but other tests work well on this input…
[Plot: y_i against position i]
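A sketch of this first attempt together with the slide's bad input: the list 1, …, n/4 repeated four times has only three descents, so a constant number of adjacent-pair checks almost never catches one, even though the list is far from sorted. Names are illustrative:

```python
import random

def adjacent_pair_test(ys, trials):
    """First attempt: sample random adjacent pairs and check that they are in order."""
    n = len(ys)
    for _ in range(trials):
        i = random.randrange(n - 1)
        if ys[i] > ys[i + 1]:
            return "FAIL"
    return "PASS"

# Bad input: 1..n/4 repeated four times has only three "breakpoints", so a small
# number of adjacent-pair checks almost always passes, although the list is far from sorted.
n = 10 ** 6
bad = list(range(1, n // 4 + 1)) * 4
print(adjacent_pair_test(bad, trials=20))   # almost certainly "PASS"
```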

29 A second attempt
- Proposed algorithm: pick random i < j and test that y_i ≤ y_j
- Bad input type: n/4 groups of 4 decreasing elements: 4, 3, 2, 1, 8, 7, 6, 5, 12, 11, 10, 9, …, 4k, 4k-1, 4k-2, 4k-3, …
- The largest monotone sequence has length n/4
- Must pick i, j in the same group to see a problem, so Ω(n^{1/2}) samples are needed
[Plot: y_i against position i]
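The same kind of sketch for the second attempt: with n/4 decreasing blocks of size 4, a violating pair must lie inside a single block, so by a birthday-paradox argument roughly √n sampled positions are needed before two of them share a block. Names are illustrative:

```python
import random

def pair_test(ys, num_samples):
    """Second attempt: sample some positions and check every sampled pair i < j."""
    idx = sorted(random.randrange(len(ys)) for _ in range(num_samples))
    for a in range(len(idx)):
        for b in range(a + 1, len(idx)):
            if idx[a] < idx[b] and ys[idx[a]] > ys[idx[b]]:
                return "FAIL"
    return "PASS"

# Bad input from the slide: n/4 blocks of 4 decreasing values (4,3,2,1, 8,7,6,5, ...).
# Every violating pair lies inside one block, so far fewer than sqrt(n) sampled
# positions almost never put two samples in the same block.
n = 10 ** 6
bad = [4 * (k // 4) + (4 - k % 4) for k in range(n)]
print(pair_test(bad, num_samples=50))   # almost certainly "PASS", yet the list is far from sorted
```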

30 A minor simplification
- Assume the list is distinct (i.e., x_i ≠ x_j for i ≠ j)
- Claim: this is not really easier. Why? We can "virtually" append i to each x_i:
  x_1, x_2, …, x_n → (x_1, 1), (x_2, 2), …, (x_n, n)
  e.g., 1, 1, 2, 6, 6 → (1, 1), (1, 2), (2, 3), (6, 4), (6, 5)
- This breaks ties without changing the order
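The tie-breaking transformation in a few lines of illustrative Python; tuples compare lexicographically, so equal values become distinct while the order of unequal values is unchanged:

```python
def break_ties(xs):
    """Pair each element with its position; tuples compare lexicographically,
    so ties are broken without changing the order of unequal values."""
    return [(x, i) for i, x in enumerate(xs, start=1)]

print(break_ties([1, 1, 2, 6, 6]))   # [(1, 1), (1, 2), (2, 3), (6, 4), (6, 5)]
```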

31 A test that works
The test: repeat O(1/ε) times:
- Pick a random i and look at the value y_i
- Do a binary search for y_i
- Does the binary search find any inconsistencies? If yes, FAIL
- Do we end up at location i? If not, FAIL
Pass if we never failed.
Running time: O(ε^{-1} log n). Why does this work?
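A hedged sketch of the test, simplified so that an index i counts as good when a standard binary search for y_i ends at position i; the function names and the constant number of repetitions are my choices, not from the slides:

```python
import random

def binary_search_lands_at(ys, i):
    """Run a standard binary search for the value ys[i]; return True iff the
    search is consistent with a sorted list and ends exactly at position i."""
    lo, hi = 0, len(ys) - 1
    target = ys[i]
    while lo <= hi:
        mid = (lo + hi) // 2
        if ys[mid] == target:
            return mid == i
        if ys[mid] < target:
            lo = mid + 1
        else:
            hi = mid - 1
    return False

def sortedness_test(ys, eps, c=4):
    """Pass every sorted list of distinct values; fail lists that are eps-far
    from sorted with probability >= 3/4. Each repetition costs O(log n), so the
    total running time is O((1/eps) * log n)."""
    n = len(ys)
    for _ in range(int(c / eps) + 1):
        i = random.randrange(n)
        if not binary_search_lands_at(ys, i):
            return "FAIL"
    return "PASS"

print(sortedness_test(list(range(1000)), eps=0.1))           # "PASS"
print(sortedness_test(list(range(1000, 0, -1)), eps=0.1))    # almost certainly "FAIL"
```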

32 Behavior of the test
- Define index i to be good if the binary search for y_i is successful
- The O((1/ε) log n)-time test (restated): pick O(1/ε) indices i and pass if they are all good
Correctness:
- If the list is sorted, then all i's are good (uses distinctness), so the test always passes
- If the list is likely to pass the test, then at least (1-ε)n of the i's are good
- Main observation: the good elements form an increasing sequence
  Proof: for i < j both good, we need to show y_i < y_j. Let k be the least common ancestor of i and j in the binary search tree. The search for i went left of k and the search for j went right of k, so y_i < y_k < y_j.
- Thus the list is ε-close to sorted (delete the < εn bad elements)

33 Part II: Sample model

34 Testing and learning distributions over BIG domains

35 Outbreak of diseases (flu 2015 vs. flu 2016)
- Similar patterns?
- Correlated with income level?
- More prevalent near large cities?
- Lots of zip codes!

36 Testing closeness of two distributions: trend change?
- Transactions of 20-30 year olds vs. transactions of 30-40 year olds
- Lots of product types!

37 Play the lottery? Is it uniform? Is it independent?

38 Information in neural spike trains [Strong, Koberle, de Ruyter van Steveninck, Bialek '98]
- Each application of a stimulus gives a sample of the signal (spike train)
- The entropy of the (discretized) signal indicates which neurons respond to the stimuli
[Figure: neural signals over time]

39 High dimensional data
- Clusterable?
- Depends on all or only a few variables?
- Example variables: blood pressure, age, weight, pulse, temperature, … vs. ICU admission, mortality, ventilator, …

40 What properties do your big distributions have?

41 Properties of distributions
Given samples of a distribution, we need to know, e.g.:
- Is it uniform, normal, Poisson, Gaussian, Zipfian, …?
- Correlations/independence?
- Entropy
- Number of distinct elements
- "Shape" (monotone, bimodal, …)

42 Key Question
How many samples do you need in terms of the domain size?
- Do you need to estimate the probabilities of each domain item?
  -- OR --
- Can the sample complexity be sublinear in the size of the domain? (This rules out standard statistical techniques.)
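As one concrete illustration that sublinear sample complexity is possible (this particular test is a classical collision-based uniformity tester, not something stated on the slides): the collision probability of a distribution on n elements equals 1/n exactly for the uniform distribution and is noticeably larger for distributions far from uniform, and it can be estimated from roughly √n/ε² samples. The thresholds and constants below are illustrative:

```python
import random
from collections import Counter

def collision_statistic(samples):
    """Fraction of ordered sample pairs that collide: an unbiased estimate of sum_x p(x)^2."""
    m = len(samples)
    counts = Counter(samples)
    return sum(c * (c - 1) for c in counts.values()) / (m * (m - 1))

def uniformity_test(samples, domain_size, eps):
    """Accept if the estimated collision probability is close to 1/n (its value
    under the uniform distribution); reject if it is noticeably larger."""
    return "PASS" if collision_statistic(samples) <= (1 + eps ** 2 / 2) / domain_size else "FAIL"

# Hypothetical lottery domain of size 10**6, tested from ~40,000 samples (far fewer than n):
n, eps = 10 ** 6, 0.5
m = int(10 * n ** 0.5 / eps ** 2)
uniform = [random.randrange(n) for _ in range(m)]
skewed = [random.randrange(n // 2) for _ in range(m)]   # uniform on half the domain: far from uniform
print(uniformity_test(uniform, n, eps), uniformity_test(skewed, n, eps))   # expected: PASS FAIL
```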

43 Well? In many cases, the answer is YES!

44 Assumptions on distributions?
- Is it smooth? Monotone? Normal? Gaussian?
- Hopefully: NO ASSUMPTIONS!

45 Distributions on BIG domains

46 Independence, monotonicity, Gaussian, normal, uniform, …; clusterable, depends on few variables; and lots and lots more!

47 To summarize: what can we do in sublinear time/space?
- Approximations: what do we mean by this?
  - Optimization problems
  - Decision problems (property testing); which distance metric?
- Which types of problems? Graphs and combinatorial objects, strings, functions, probability distributions
- Settings? Small time, small space, assistance of a prover?

48 Related course: Sublinear Time Algorithms (0368.4167), Mondays 13-16 in Shenkar 104

49 My contact info: www.cs.tau.ac.il/~Rubinfeld/COURSES/F15sem/index.html (you can also google "Ronitt" and follow links). My email: rubinfeld@post.tau.ac.il

50 Giving a talk

51 Main issue

52 Giving an INTERESTING talk
- Good material
- Good slides
- Good delivery

53 Material
- Pick something you like!
- How many technical details?
- Make sure we learn something technical/conceptual

54 Slides
- At least 24 point font
- As few words as possible; split into more slides?
- Organization: use bullets and sub-bullets; use color wisely
- Write anything that should be remembered; repeat a definition on the next slide? use a board?
- Animations? Don't overdo it!
- SPELL CHECK!!

55 Delivery
Bad:
- mumbling
- a monotone voice
- repeating each sentence twice
- filler words
- standing in front of the slides
- not looking at the audience
Important: practice the talk, several times!

56 What now? Pick a paper, or two or three? Come talk to me!

