
Maximizing Communication for Spam Fighting
Oded Schwartz
CS294, Lecture #19, Fall 2011: Communication-Avoiding Algorithms
www.cs.berkeley.edu/~odedsc/CS294
Based on:
Cynthia Dwork, Andrew Goldberg, Moni Naor. On Memory-Bound Functions for Fighting Spam.
Cynthia Dwork, Moni Naor, Hoeteck Wee. Pebbling and Proofs of Work.
Many slides borrowed from: http://www.wisdom.weizmann.ac.il/~naor/PAPERS/spam.ppt

Motivation
Spam appears in email, IM, forums, SMS, …
What is spam? (And who is a spammer?)
From the Messaging Anti-Abuse Working Group (MAAWG) 2010 report:
- "The definition of spam can vary greatly from country to country and as used in local legislation" (e.g., opt-in vs. opt-out).
- "The percentage of email identified as abusive has oscillated (mid 2009 to end of 2010) between 88% and 91%."
California Business and Professions Code, 2007: US spam cost estimate (direct and indirect): $13 billion (productivity, manpower, software, …)

Techniques for (email) spam fighting
Email filtering, by receiver or by server:
- White lists, black lists, greylisting
- Text-based filters
- Trainable filters
- Human-assisted filters
- …

Techniques for (email) spam fighting
Making the sender pay ($): the system must make sending spam unprofitably expensive for the spammer, yet must not prevent legitimate users from sending their messages.
Micropayments: pay a small amount for each email you send.

Techniques for (email) spam fighting
Pay with human attention: a reverse Turing test, using tasks that are easy for humans but hard for machines. Examples:
- Gender recognition
- Facial-expression recognition
- Finding body parts
- Deciding nudity
- Understanding naïve drawings
- Understanding handwriting
- Filling in missing words
- Disambiguation
CAPTCHA: "Completely Automated Public Turing test to tell Computers and Humans Apart"

Techniques for (email) spam fighting
Pay with computer time: Proof of Work (PoW).
- Computation-bound: Dwork & Naor 92; Back 97; Abadi, Burrows, Manasse, Wobber 03
- Communication-bound: Dwork, Goldberg, Naor 03; Dwork, Naor, Wee 05

Proof of Work
If I don't know you, prove you spent significant computational resources, just for me and just for this message. And make it easy for me to verify.

Proof of Work variants: interactive challenge-response protocols.
A: Request ("Would you like to talk with me?")
B: Challenge ("Only if you answer my riddle. Why is a raven like a writing-desk?")
A: Respond ("I haven't the slightest idea. What's the answer?")
B: Approve/reject ("Hmmm…")
Is an interactive protocol good enough?

Proof of Work variants: one-round protocols (solution-verification).
The challenge is self-imposed by the sender before a solution is sought; the receiver checks both the problem choice and the found solution.
A: Compute, Solve, Send ("Would you like to talk with me? Here is a token of my sincerity.")
B: Verify
Properties: automated for the user; non-interactive, single-pass; no need for a third party or payment infrastructure.
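This solution-verification pattern can be sketched as a Hashcash-style hash puzzle (in the spirit of Back 97's CPU-bound scheme mentioned earlier, not the memory-bound function of the later slides); the challenge encoding and the choice of SHA-256 are illustrative assumptions:

```python
import hashlib
from itertools import count

def leading_zero_bits(digest: bytes) -> int:
    """Count the leading zero bits of a byte string."""
    bits = 0
    for byte in digest:
        if byte == 0:
            bits += 8
        else:
            bits += 8 - byte.bit_length()
            break
    return bits

def solve(challenge: str, difficulty: int) -> int:
    """Sender: search for a nonce whose hash has `difficulty` leading zero bits."""
    for nonce in count():
        digest = hashlib.sha256(f"{challenge}:{nonce}".encode()).digest()
        if leading_zero_bits(digest) >= difficulty:
            return nonce

def verify(challenge: str, nonce: int, difficulty: int) -> bool:
    """Receiver: a single hash call checks the work."""
    digest = hashlib.sha256(f"{challenge}:{nonce}".encode()).digest()
    return leading_zero_bits(digest) >= difficulty
```

The sender's expected work grows as 2^difficulty hash evaluations, while verification is always one hash call, matching the asymmetry the slide asks for.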

Choosing the function f
Inputs: message m, sender S, receiver R, date/time d.
- Hard to compute: f(m,S,R,d) requires a lot of work from the sender, and the cost cannot be amortized.
- Easy to check: verifying "z = f(m,S,R,d)" is little work for the receiver.
- Parameterized to scale with Moore's Law: easy to increase the computational cost exponentially while barely increasing the checking cost.
Example: computing a square root mod a prime vs. verifying it (x² = y mod p).
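The square-root example can be made concrete for primes p ≡ 3 (mod 4), where computing a root is one modular exponentiation but verification is a single multiplication; the specific prime below is an arbitrary illustrative choice:

```python
# Modular square roots when p ≡ 3 (mod 4): computing f costs a full modular
# exponentiation; verifying it costs one multiplication.
p = 10007  # a small prime with p % 4 == 3, chosen purely for illustration

def mod_sqrt(y, p):
    """Return x with x*x ≡ y (mod p); valid for p ≡ 3 (mod 4) and y a square."""
    return pow(y, (p + 1) // 4, p)

def verify_sqrt(x, y, p):
    """The receiver's check: a single modular multiplication."""
    return (x * x) % p == y % p

y = (1234 * 1234) % p   # choose y as a known square so a root exists
x = mod_sqrt(y, p)
```

For cryptographic-size primes the gap widens further: the exponentiation costs hundreds of multiplications, the check still costs one.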

Worst case vs. average case
Inputs: message m, sender S, receiver R, date/time d.
We want f to be hard on average, so that the cost cannot be amortized over the inputs:
- It is OK if some inputs are easy (then a spammer can send only a few emails cheaply).
- It is not enough for f to be hard in the worst case but not on average; otherwise the spammer could not send everything she wants, but could still send many emails at a low average cost.

Basing hardness: resource
Goal: design a proof-of-work function that requires a large number of flops / memory accesses / network accesses.
- CPU (flop count): Back 97; Abadi, Burrows, Manasse, Wobber 03. High variance, lower cost.
- Memory access: Dwork & Naor 92; Abadi, Burrows, Manasse, Wobber 03; Dwork, Goldberg, Naor 03; Dwork, Naor, Wee 05. Lower variance, higher cost.
- Network access: Abliz & Znati 09.

Basing hardness: resource
Goal: design a proof-of-work function that requires a large number of flops / memory accesses / network accesses.
We should assume that a spammer has better resources than a legitimate user. The large gaps in annual hardware improvement rates do not matter, as long as the function has a scaling parameter of effort e.
Annual hardware improvements: exponential growth, with large gaps between resources [Graham, Snir, Patterson 04], [Fuller, Millett 10]:
- CPU time per flop: ~59% per year
- Bandwidth (time per word): DRAM ~23%, network ~26% per year
- Latency (time per message): DRAM ~5%, network ~15% per year

Hardness based on:
Goal: design a proof-of-work function that requires a large number of flops / memory accesses / network accesses. Possible foundations:
- Information-theoretic bound. Example: "Where is Waldo?" in a truly random string (…ponqawgrouhpycsstuklwfskbmbnvbs fhydoazsvhywsuhzqagwaldoanftqlbdl…) that is too large to fit in local/fast memory.
- Proved time/space separation.
- Complexity assumption. Example: P ≠ NP.
- Cryptographic assumption. Example: discrete log.
- A problem with a fast verification scheme but no known sufficiently efficient algorithm. Example: matrix multiplication vs. verifying a matrix product.

The memory-bound model
USER: main memory (large but slow); cache (small but fast).
SPAMMER: main memory (may be very, very large; may exploit locality); cache (at most half the size of the user's main memory).

The memory-bound model (cost accounting)
USER: main memory (large but slow); cache (small but fast).
SPAMMER: main memory (may be very, very large); cache (at most half the size of the user's main memory).
1. Charge for accesses to main memory; the design must prevent exploitation of locality.
2. Computation is free, except for hash-function calls; watch out for low-space crypto attacks.

Example of a PoW: general scheme
Consider a huge implicit graph G, so large that a vertex name barely fits into fast memory; its edges are defined by functions on the vertex names.
The objective is to find a path p in G with a certain (rare) property that depends on (m,S,R,d).
Searching for p should be hard: the running time should grow with the number of candidate paths in G.
Verifying that a given path p has the property should take time proportional only to the path length |p|.

[Figure: a collection P of paths of length L, depending on (m,S,R,d), with one successful path highlighted.]

Abstracted algorithm
Sender and receiver share a large random table T.
To send message m from sender S to receiver R at date/time d, repeat the following trial for k = 1, 2, … until success. The current state is specified by an auxiliary table A; the thread is defined by (m,S,R,d,k).
- Initialization: A = H0(m,S,R,d,k)
- Main loop, walk for L steps (L = path length):
    c = H1(A)
    A = H2(A, T[c])
- Success: if the last e bits of H3(A) are 00…0.
Attach to (m,S,R,d) the successful trial number k and H3(A).
Verification is straightforward given (m,S,R,d,k,H3(A)).
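The abstracted algorithm can be sketched as a toy, deliberately down-scaled simulation; the table size, L, and e below are illustrative assumptions far smaller than the real parameters, and SHA-256 stands in for the idealized hash functions H0..H3:

```python
import hashlib
import random

# Hypothetical toy parameters (the real scheme uses a table larger than
# the spammer's cache and much larger E = 2^e and L).
TABLE_WORDS = 1 << 16
L = 64          # walk length
e = 8           # success: last e bits of H3(A) are zero

random.seed(0)
T = [random.getrandbits(32) for _ in range(TABLE_WORDS)]  # shared random table

def H(tag, *vals):
    """Illustrative stand-in for the idealized hash functions H0..H3."""
    data = (tag + ":" + ":".join(map(str, vals))).encode()
    return int.from_bytes(hashlib.sha256(data).digest(), "big")

def trial(m, S, R, d, k):
    """One trial: walk L steps through T, return final state A on success."""
    A = H("H0", m, S, R, d, k)
    for _ in range(L):
        c = H("H1", A) % TABLE_WORDS   # each step forces a table read
        A = H("H2", A, T[c])
    return A if H("H3", A) % (1 << e) == 0 else None

def prove(m, S, R, d):
    """Sender: repeat trials for k = 1, 2, ... until one succeeds."""
    k = 1
    while True:
        A = trial(m, S, R, d, k)
        if A is not None:
            return k, H("H3", A)
        k += 1

def verify(m, S, R, d, k, h3):
    """Receiver: re-run only the claimed trial k (one walk of L steps)."""
    A = trial(m, S, R, d, k)
    return A is not None and H("H3", A) == h3
```

The sender performs an expected 2^e trials of L table reads each, while the receiver re-runs a single walk, which is the E-to-1 cost asymmetry the next slide quantifies.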

[Figure: animated algorithm, a single step in the loop: c = H1(A); fetch T[c]; A = H2(A, T[c]).]

Full specification
E = expected factor by which computation cost exceeds verification cost = expected number of trials = 2^e, if H3 behaves as a random function.
L = length of the walk.
We want, say, E·L·t ≈ 10 seconds, where t = memory latency ≈ 0.2 μs.
Reasonable choices: E = 24,000, L = 2,048.
Also needed: how large should A be? A should not be very small.
Abstract algorithm:
1. Initialize: A = H0(m,S,R,d,k)
2. Main loop, walk for L steps: c = H1(A); A = H2(A, T[c])
3. Success if H3(A) = 0^(log E)
4. Trial repeated for k = 1, 2, …
5. Proof = (m,S,R,d,k,H3(A))
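The parameter arithmetic can be checked directly; the ~0.2 μs memory latency is the assumption that makes E·L·t land near 10 seconds:

```python
E = 24_000   # expected number of trials
L = 2_048    # walk length (table reads per trial)
t = 0.2e-6   # memory latency per access, ~0.2 microseconds (assumed)

sender_cost = E * L * t   # expected sender cost, bound by memory latency
verifier_cost = L * t     # receiver re-runs one walk of L reads
# E * L ≈ 4.9e7 accesses at 0.2 µs each ≈ 9.8 s for the sender,
# versus ≈ 0.4 ms for the verifier.
```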

Path-following approach [Dwork, Goldberg, Naor, Crypto 03]
Theorem: fix any spammer whose cache size is smaller than |T|/2. Assuming T is truly random and H0,…,H3 are idealized hash functions, the amortized number of memory accesses per successful message is Ω(2^e · L).
Remarks:
1. The lower bound holds for a spammer maximizing throughput across any collection of messages and recipients.
2. The idealized hash functions are modeled as random oracles.
3. The proof relies on the information-theoretic unpredictability of T.

Using a succinct table [DNW 05]
Goal: use a table T with a succinct description, for easy distribution of software (new users) and fast updates (over slow connections).
Problem: we lose information-theoretic unpredictability; the spammer can exploit the succinct description to avoid memory accesses.
Idea: generate T using a memory-bound process, via time/space trade-offs for pebbling (studied extensively in the 1970s). The user builds the table T once and for all.

Choosing the H's
A "theoretical" approach: idealized random functions; provide a formal analysis showing that the amortized number of memory accesses is high.
A concrete approach: inspired by the RC4 stream cipher; very efficient, a few cycles per step. There is no time inside the inner loop to compute a complex function; A is not small and changes gradually.

RC4 (Rivest Cipher 4)
Generates a pseudorandom stream of bits. Used in SSL (Secure Sockets Layer), WEP (Wired Equivalent Privacy), WPA (Wi-Fi Protected Access), SSH (Secure Shell), and many other applications.
Its outputs are biased, which allows several types of attacks; e.g., the 104-bit RC4 key used in 128-bit WEP can be recovered in under a minute.

Pebbling a graph
Given: a directed acyclic graph.
Rules:
- A pebble can be placed on an input node at any time.
- A pebble can be placed on a non-input node if all of its immediate parent nodes have pebbles.
- Pebbles may be removed at any time.
Goal: find a strategy that pebbles all the outputs while using few pebbles and few moves.
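The rules above can be sketched as a small strategy checker (the graph encoding and example DAG are illustrative assumptions):

```python
def pebble(graph, strategy):
    """Replay a pebbling strategy on a DAG, enforcing the rules above.

    graph:    dict mapping each node to its list of parents ([] for inputs).
    strategy: list of ("place", v) and ("remove", v) moves.
    Returns the peak number of pebbles simultaneously in use.
    """
    pebbled, peak = set(), 0
    for op, v in strategy:
        if op == "place":
            # a pebble may go on v only if all its parents carry pebbles;
            # inputs have no parents, so they can be pebbled at any time
            assert all(p in pebbled for p in graph[v]), f"illegal place on {v}"
            pebbled.add(v)
            peak = max(peak, len(pebbled))
        else:
            pebbled.remove(v)   # pebbles may be removed at any time
    return peak

# A chain a -> b -> c can be pebbled with only 2 pebbles:
chain = {"a": [], "b": ["a"], "c": ["b"]}
moves = [("place", "a"), ("place", "b"), ("remove", "a"), ("place", "c")]
```

Minimizing the peak returned here, over all strategies that pebble every output, is exactly the space side of the time/space trade-off the deck uses.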

Succinctly generating T
Given a directed acyclic graph of constant in-degree:
1. An input node i is labeled H4(i).
2. A non-input node i with parents j, k is labeled L_i = H4(i, L_j, L_k).
3. The entries of T are the labels of the output nodes.
Observation: a good pebbling strategy yields a good spammer strategy.
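The labeling rule can be sketched as follows; H4 is an illustrative SHA-256 stand-in and the tiny four-node DAG is hypothetical:

```python
import hashlib

def H4(*vals):
    """Illustrative stand-in for the labeling hash H4."""
    return hashlib.sha256(":".join(map(str, vals)).encode()).hexdigest()

def labels(graph, topo_order):
    """L_i = H4(i) for inputs; L_i = H4(i, parent labels) otherwise."""
    L = {}
    for i in topo_order:
        parents = graph[i]
        L[i] = H4(i) if not parents else H4(i, *(L[j] for j in parents))
    return L

# A hypothetical 4-node DAG; T consists of the output-node labels only,
# so the whole table regenerates from this succinct graph description.
g = {"in0": [], "in1": [], "mid": ["in0", "in1"], "out": ["mid", "in1"]}
lab = labels(g, ["in0", "in1", "mid", "out"])
T = {"out": lab["out"]}
```

Recomputing a label from scratch forces recomputation of its ancestors, which is why a good pebbling strategy for the graph translates into a good (cache-limited) spammer strategy.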

Converting a spammer strategy to a pebbling
Ex post facto pebbling: computed by offline inspection of the spammer's strategy.
1. Placing a pebble: place a pebble on node i if H4 is used to compute L_i = H4(i, L_j, L_k) and L_j, L_k are the correct labels.
2. Initial pebbles: place an initial pebble on node j if H4 is applied with L_j as an argument but L_j was not computed via H4.
3. Removing a pebble: remove a pebble as soon as it is no longer needed.
Idea: limit the number of pebbles used by the spammer as a function of its cache size and the number of bits it brings from memory. Then:
- Computing a label uses a hash-function call, so a lower bound on the number of moves gives a lower bound on the number of hash-function calls.
- Pebbles correspond to cache contents plus memory fetches, so a lower bound on the number of pebbles gives a lower bound on the number of memory accesses.

Succinctly generating T (continued)
We need a graph that is hard to pebble on average, i.e., every large set of outputs requires many initial pebbles (initial pebbles correspond to reads from fast/slow memory).
Example: superconcentrators.

Open problems
Weaker assumptions:
- An unconditional result?
- Use the red-blue pebbling model?
- Use the edge-expansion approach?
- Other no-amortization proofs, for solving many instances in parallel/serial?


Expansion (3rd approach) [Ballard, Demmel, Holtz, S. 2011b], in the spirit of [Hong & Kung 81]
Let G = (V,E) be a d-regular graph, and let A be its normalized adjacency matrix, with eigenvalues λ1 ≥ λ2 ≥ … ≥ λn. The spectral gap is 1 − max{λ2, |λn|}.
Theorem [Alon-Milman 84, Dodziuk 84, Alon 86]: the spectral gap controls the edge expansion of G, with a corresponding bound for small sets.
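For reference, one standard normalized form of the Cheeger-type inequality these citations establish, stated under the assumption that this is the shape of the formula the slide intends:

```latex
% Edge expansion (conductance) of a d-regular graph G = (V,E), |V| = n:
h(G) \;=\; \min_{\substack{S \subseteq V \\ |S| \le n/2}}
            \frac{|E(S, V \setminus S)|}{d\,|S|}
% Discrete Cheeger inequality, with \lambda_2 the second eigenvalue of
% the normalized adjacency matrix:
\frac{1 - \lambda_2}{2} \;\le\; h(G) \;\le\; \sqrt{2\,(1 - \lambda_2)}
```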

Expansion (3rd approach): the computation directed acyclic graph
Communication cost is graph expansion.
Nodes: input / output and intermediate values; edges: dependencies.
V_S: a subset of the computation; R_S: its reads; W_S: its writes.

Proof highlights
Use of an idealized hash function implies:
- At any point in time, A is incompressible.
- The average number of oracle calls per success is Ω(E·L).
- We can follow the progress of the algorithm.
Cast the problem as asymmetric communication complexity between memory and cache: only the cache has access to the functions H1 and H2.

Why random oracles?
Random Oracles 101: we can measure progress, because we know which oracle calls must be made and can see when they occur. The first occurrence of each such call is a progress call: 1 2 3 1 3 2 3 4 …
Abstract algorithm (for reference):
1. Initialize: A = H0(m,S,R,d,k)
2. Main loop, walk for L steps: c = H1(A); A = H2(A, T[c])
3. Success if H3(A) = 0^(log E)
4. Trial repeated for k = 1, 2, …
5. Proof = (m,S,R,d,k,H3(A))

What do we know about pebbling?
- Any graph can be pebbled using O(N/log N) pebbles [Valiant].
- There are graphs requiring Ω(N/log N) pebbles [Paul, Tarjan, Celoni].
- Any constant-degree graph of depth d can be pebbled using O(d) pebbles.
- Tight trade-offs: some shallow graphs require many (super-polynomially many) steps to pebble with few pebbles [Lengauer, Tarjan].
- Some results about pebbling outputs hold even when the available pebbles may be put in any initial configuration, e.g., the Hong & Kung lower bound for the FFT.

A theory of moderately hard functions?
A key idea in cryptography is to use the computational infeasibility of problems to obtain security. For many applications, however, moderate hardness is what is needed; current applications include abuse prevention, fairness, and few-round zero-knowledge.
Further work: develop a theory of moderately hard functions.

Open problems: moderately hard functions
- Unifying assumption: in the intractable world, one-way functions are necessary and sufficient for many tasks. Is there a similar primitive when moderate hardness is needed?
- Precise model: details of the computational model may matter; can we unify them?
- Hardness amplification: start with a somewhat hard problem and turn it into one that is harder.
- Hardness vs. randomness: can we turn moderate hardness into moderate pseudorandomness? The standard transformation is not necessarily applicable here.
- Evidence for non-amortization: is it possible to demonstrate that if a certain problem is not resilient to amortization, then a single instance can be solved much more quickly?

Open problems: moderately hard functions (continued)
- Immunity to parallel attacks: important for timed commitments, where the power function was used. Is there a good argument to show immunity against parallel attacks?
- Worst case to average case: is it possible to find a random self-reduction? In the intractable world, there are known limitations on random self-reductions from NP-complete problems. Is it possible to randomly reduce a P-complete problem to itself? Can linear programming or lattice-basis reduction be used for such purposes?
- New candidates for moderately hard functions.

