Universal Semantic Communication. Madhu Sudan, MIT CSAIL. Joint work with Brendan Juba (MIT CSAIL). Presentation transcript, 4/17/2008 (running footer: "Semantic Communication").


1 Universal Semantic Communication. Madhu Sudan, MIT CSAIL. Joint work with Brendan Juba (MIT CSAIL).

2 A fantasy setting (SETI). (Figure: Alice, an alien, sends Bob the bit string 010010101010001111001000.) Is there an intelligent alien (Alice) out there? What should Bob's response be? If there are further messages, are they reacting to him? No common language! Is meaningful communication possible?

3 Pioneer's faceplate. Why did they choose this image? What would you put? What are the assumptions and implications?

4 Motivation: Better Computing. Networked computers use common languages: interaction between computers (getting your computer onto the internet); interaction between pieces of software; interaction between software, data and devices. Getting two computing environments to "talk" to each other is getting problematic: time consuming, unreliable, insecure. Can we communicate more like humans do?

5 Classical paradigm for interaction. (Figure: Object 1 and Object 2 interact through interfaces fixed by a common Designer.)

6 New paradigm. (Figure: Object 1 has its Designer, but Object 2 is designed independently.)

7 Robust interfaces. Want one interface for all "Object 2"s. Can such an interface exist? What properties should it exhibit? Our thesis: it is sufficient (for Object 1) to count on the intelligence (of Object 2). But how to detect this intelligence? That puts us back in the "Alice and Bob" setting.

8 Goal of this talk. Definitional issues and a definition: What is successful communication? What is intelligence? Cooperation? Theorem: "If Alice and Bob are intelligent and cooperative, then communication is feasible" (in one setting). Proof ideas suggest protocols, phenomena, and methods for proving/verifying intelligence.

9 What has this to do with computation? In general: subtle issues related to "human" intelligence/interaction are within the scope of computational complexity, e.g.: Proofs? Easy vs. hard? (Pseudo)random? Secrecy? Knowledge? Trust? Privacy? This talk: What is "understanding"?

10 A first attempt at a definition. Alice and Bob are "universal computers" (aka programming languages) and have no idea what the other's language is! Can they learn each other's language? Good news: language learning is finite; one can enumerate to find a translator. Bad news: there is no third party to hand over that finite string! Enumerate? Bob can't tell right from wrong.
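A tiny sketch, mine rather than the talk's, of why enumeration alone cannot tell right from wrong: two candidate translators can agree on every message observed so far yet diverge on the next one, and without a goal nothing distinguishes them.

```python
# Toy illustration (mine, not from the talk) of "can't tell right/wrong":
# two candidate translators agree on every message Bob has observed so
# far, yet disagree on the next one. With no goal, nothing favors either.
def translator_a(msg):
    return msg[::-1]                              # "Alice reverses strings"

def translator_b(msg):
    return msg[::-1] if len(msg) < 5 else msg     # agrees only on short ones

observed = ["hi", "ok", "yes"]                    # all messages seen so far
assert all(translator_a(m) == translator_b(m) for m in observed)
assert translator_a("hello") != translator_b("hello")   # future divergence
```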

11 Communication & Goals. Indistinguishability of right/wrong: a consequence of "communication without a goal". Communication (with or without a common language) ought to have a "Goal". Before we ask how to improve communication, we should ask why we communicate: "Communication is not an end in itself, but a means to achieving a Goal."

12 Part I: A Computational Goal

13 Computational Goal for Bob. Bob wants to solve a hard computational problem: decide membership in a set S. Can Alice help him? What kind of sets S? E.g.: S = {programs P that are not viruses}; S = {non-spam email}; S = {winning configurations in Chess}; S = {(A,B) | A has a factor less than B}.

14 Review of complexity classes. P (BPP): solvable in (randomized) polynomial time (Bob can solve these without Alice's help). NP: problems whose solutions can be verified in polynomial time (contains factoring). PSPACE: problems solvable in polynomial space (quite infeasible for Bob to solve on his own). Computable: problems solvable in finite time (includes all the above). Uncomputable: virus detection, spam filtering. Which problems can you solve with (alien) help?

15 Setup. (Figure: Bob picks randomness R ← $$$ and exchanges rounds with Alice: queries q₁, …, q_k and answers a₁, …, a_k.) Which class of sets? To decide x ∈ S?, Bob checks f(x; R; a₁, …, a_k) = 1? Hopefully, x ∈ S ⇔ f(···) = 1.

16 Contrast with interactive proofs. Similarity: interaction between Alice and Bob. Difference: in IP, Bob does not trust Alice (in our case, Bob does not understand Alice). Famed theorem: IP = PSPACE [LFKN, Shamir]. Membership in a PSPACE-solvable S can be proved interactively to a probabilistic Bob; this needs a PSPACE-complete prover Alice.

17 Intelligence & cooperation? For Bob to have a non-trivial interaction, Alice must be: Intelligent: capable of deciding if x ∈ S. Cooperative: must communicate this to Bob. Modelling Alice: she maps (state of mind, external input) to (new state of mind, output). Formally: …

18 Successful universal communication. Bob should be able to talk to any S-helpful Alice and decide S. Formally, a p.p.t. B is S-universal iff for every x ∈ {0,1}*: A is S-helpful ⇒ [A ↔ B(x)] = 1 iff x ∈ S (w.h.p.); A is not S-helpful ⇒ nothing! Or should it be: A is not S-helpful ⇒ ([A ↔ B(x)] = 1 implies x ∈ S).

19 Main theorem. If there exists an S-universal Bob, then S is in PSPACE. If S is PSPACE-complete (e.g., Chess), then there exists an S-universal Bob. (Generalizes to any checkable set S.) In English: if S is moderately stronger than what Bob can do on his own, then attempting to solve S leads to non-trivial (useful) conversation; if S is too strong, it leads to ambiguity. Uses IP = PSPACE.

20 A few words about the proof. Positive result: enumeration + interactive proofs. (Figure: Alice acts as Prover; Bob runs a guessed Interpreter.) Guess: an Interpreter; is x ∈ S? If the proof works ⇒ x ∈ S; if it doesn't work ⇒ the guess was wrong. Alice S-helpful ⇒ an Interpreter exists!
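The enumerate-and-verify idea can be sketched in toy form. Everything here is invented for the demo (the set S, the XOR encoding, the key bound); the paper's actual construction enumerates interpreters and runs an interactive proof. The point is only that Bob never needs to know which interpreter is right, because verification catches wrong guesses:

```python
# Toy sketch of "enumeration + verification" (my own illustration).
# Here S = {composite numbers}; a helpful Alice sends a nontrivial
# factor, encoded with a key Bob does not know.

SECRET_KEY = 42   # Alice's private encoding; Bob never reads this

def alice(x):
    """S-helpful Alice: an encoded nontrivial factor of x (0 if prime)."""
    for d in range(2, x):
        if x % d == 0:
            return d ^ SECRET_KEY
    return 0 ^ SECRET_KEY

def universal_bob(x, max_interpreters=256):
    """Enumerate candidate decoders; accept only on a verified factor."""
    encoded = alice(x)
    for key_guess in range(max_interpreters):   # "guess an Interpreter"
        d = encoded ^ key_guess                 # tentative decoding
        if 1 < d < x and x % d == 0:            # the "proof" checks out
            return True                         # => x in S, whichever key
    return False    # no interpreter yielded a convincing proof

assert universal_bob(15) is True    # composite: some decoding verifies
assert universal_bob(13) is False   # prime: no decoding can verify
```

A failed check never misleads Bob: a wrong decoder can only produce a non-factor, which verification rejects, mirroring "proof works ⇒ x ∈ S; doesn't work ⇒ guess wrong."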

21 Proof of the negative result. L not in PSPACE implies Bob makes mistakes. Suppose Alice answers every question so as to minimize the conversation length (a reasonable effect of misunderstanding). The conversation comes to an end quickly, and Bob has to decide. The conversation plus decision are simulatable in PSPACE (since Alice's strategy can be computed in PSPACE), so Bob must be wrong if L is not in PSPACE. Warning: this only leads to finitely many mistakes.

22 Potential criticisms of the main theorem. "This is just rephrasing IP = PSPACE." No: the result proves that "misunderstanding equals mistrust", which was not a priori clear. And even this is true only in some contexts.

23 Potential criticisms of the main theorem. "This is just rephrasing IP = PSPACE." "Bob is too slow: he takes time exponential in the length of Alice, even in his own description of her!" A priori it was not clear why he should have been able to decide right from wrong at all. Polynomial-time learning is not possible in our model of a "helpful Alice". Better definitions can be explored: future work.

24 Potential criticisms of the main theorem. "This is just rephrasing IP = PSPACE." "Bob is too slow: he takes time exponential in the length of Alice, even in his own description of her!" "Alice has to be infinitely/PSPACE powerful…" But not as powerful as that anti-virus program! Wait for Part II.

25 Part II: Intellectual Curiosity

26 Setting: Bob more powerful than Alice. What should Bob's goal be? He can't use Alice to solve problems that are hard for him. He can pose problems and see if she can solve them, e.g., teacher-student interactions. But how does he verify "non-triviality"? What is "non-trivial"? He must distinguish… (Figure: Scene 1 vs. Scene 2: Bob and his Interpreter, with and without an Alice behind them.)

27 Setting: Bob more powerful than Alice. Concretely: Bob is capable of TIME(n^10); Alice is capable of TIME(n^3), or of nothing. Can Bob distinguish the two settings? Answer: yes, if Translate(Alice, Bob) is computable in TIME(n^2): Bob poses TIME(n^3) problems to Alice and enumerates all TIME(n^2) interpreters. Moral: language (translation) should be simpler than the problems being discussed.

28 Part III: Concluding thoughts

29 Is this language learning? The end result promises no language learning: merely that Bob solves his problem. In the process, however, Bob learns an Interpreter! But this may not be the right Interpreter. All this is good: there is no need to distinguish indistinguishables!

30 Goals of Communication. Largely unexplored (at least explicitly). Main categories: Remote control: a laptop wants to print on a printer; buying something on Amazon. Intellectual curiosity: learning/teaching; listening to music, watching movies; coming to this talk; searching for alien intelligence. May involve a common environment/context.

31 Extension to generic goals. A generic (implementation of a) Goal is given by: a strategy for Bob; a class of Interpreters; and a Boolean function G of the private input and randomness, the interaction with Alice through the Interpreter, and the environment (altered by the actions of Alice). It should be: Verifiable: G should be easily computable. Complete: achievable with a common language (for some Alice, independent of history). Non-trivial: not achievable without Alice.

32 Generic verifiable goal. Verifiable Goal = (Strategy, Class of Interpreters, V). (Figure: Alice interacts with Bob's Strategy through an Interpreter; given input x and randomness R, Bob evaluates V(x, R, interaction).)

33 Generic goals. One can define Goal-helpful and Goal-universal, and prove the existence of a Goal-universal Interpreter for all Goals. Claim: this captures all communication (unless you plan to accept random strings). Modelling natural goals is still interesting, e.g.: Printer problem: Bob(x): Alice should say x. Intellectual curiosity: Bob: send me a "theorem" I can't prove, and a "proof". Proof of intelligence (computational power): Bob: given f and x, compute f(x). Conclusion: (goals of) communication can be achieved without a common language.

34 Role of a common language? If a common language is not needed (as we claim), then why do intelligent beings like it? Our belief: to gain efficiency, by reducing the number of bits and the number of rounds of communication. Topics for further study: What efficiency measure does language optimize? Is this difference asymptotically significant?

35 Further work. Exponential-time learning (enumerating Interpreters): what is a reasonable restriction on languages? What is the role of language in communication? What are other goals of communication? What is intelligence? Paper (Part I) available from ECCC.

36 Thank You!

37 Example. Symmetric Alice and Bob (computationally). Bob's goal: get an Interpreter in TIME(n^2) to solve TIME(n^3) problems by talking to Alice. Verifiable: Bob can generate such problems, with solutions, in TIME(n^3). Complete: Alice can solve this problem. Non-trivial: the Interpreter cannot solve the problem on its own.

38 Summary. Communication should strive to satisfy one's goals; if one does this, "understanding" follows. Understanding can be enabled by dialog: Laptop → Printer: Print. Printer: But first tell me: "If there are three oranges and you take away two, how many will you have?" Laptop: One! Printer: Sorry, we don't understand each other! Laptop: Oh wait, I got it, the answer is "Two". Printer: All right… printing.
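The dialog above is a small challenge-response handshake; a minimal sketch, with class and method names of my own invention (the riddle and replies are from the slide), might look like:

```python
# Minimal sketch of the laptop-printer handshake: the printer gates
# service behind a challenge whose answer certifies shared understanding.
class Printer:
    RIDDLE = ("If there are three oranges and you take away two, "
              "how many will you have?")
    ANSWER = "Two"    # you *have* the two you took away

    def request_print(self, answer):
        if answer == self.ANSWER:
            return "All right ... printing."
        return "Sorry, we don't understand each other!"

printer = Printer()
print(printer.request_print("One"))   # first attempt fails
print(printer.request_print("Two"))   # understanding established
```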

39 A few words about the proof. Positive result: enumeration + interactive proofs. Bob verifies x ∈ L by simulating the IP verifier. But he needs to ask the IP Prover many questions, which translate into many other questions y ∈ L. To get answers, Bob guesses a Bob′ and simulates the interaction between Alice and Bob′. If the proof is convincing, x ∈ L! If x ∈ L and Bob′ is correct, he gets a convincing proof.

40 How to model curiosity? How can Alice create non-trivial conversations when she is not more powerful than Bob? The non-triviality of a conversation depends on the ability to jointly solve a problem that Bob could not solve on his own. But now Alice can't help either! Are we stuck?

41 Communication & Goals. Indistinguishability of right/wrong: a consequence of "communication without a goal". Communication (with or without a common language) ought to have a "Goal". Bob's goal should be: Verifiable: an easily computable function of the interaction. Complete: achievable with a common language. Non-trivial: not achievable without Alice.

42 Cryptography to the rescue. Alice can generate hard problems while knowing the answer. E.g., "I can factor N"; later, "P × Q = N". If B′ is intellectually curious, he can try to factor N first on his own; he will (presumably) fail. Then Alice's second sentence will be a "revelation"… Non-triviality: Bob has verified that none of the algorithms known to him convert his knowledge into the factors of N.
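A hedged toy of Alice's trick (tiny primes purely for illustration; the helper names are mine): Alice picks P and Q herself, so she knows the answer to "factor N" in advance, while Bob presumably cannot find it for large N; verifying her later revelation P × Q = N is easy.

```python
import random

# Alice's side: build the challenge while already knowing its solution.
def is_prime(n):
    return n > 1 and all(n % d for d in range(2, int(n ** 0.5) + 1))

primes = [p for p in range(100, 200) if is_prime(p)]
P, Q = random.sample(primes, 2)
N = P * Q                                   # challenge: "I can factor N"

# Bob's side: checking the revelation is trivial even if finding it wasn't.
def verify_revelation(N, P, Q):
    """Bob's check of "P * Q = N": correct factors, both prime."""
    return P * Q == N and is_prime(P) and is_prime(Q)

assert verify_revelation(N, P, Q)
```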

43 More generally. Alice can send Bob a Goal function. Bob can try to find conversations satisfying the Goal. Once he fails, Alice can produce conversations that satisfy the Goal. Universal?

44 Part III: Pioneer Faceplate? Non-interactive proofs of intelligence?

45 Compression is universal. When Bob receives Alice's string, he should try to look for a pattern (i.e., "compress" the string). Universal efficient compression algorithm: Input(X); enumerate efficient pairs (C(·), D(·)); if D(C(X)) ≠ X, the pair is invalid; among valid pairs, output the one with smallest |C(X)|.
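The four-step algorithm above can be sketched directly. The fixed candidate list below is a stand-in for "enumerate efficient pairs (C, D)" (a real enumeration would range over all machines with a time cutoff); zlib is one concrete compressor, and the truncating pair shows how invalid pairs get culled by the round-trip test.

```python
import zlib

CANDIDATE_PAIRS = [
    (lambda x: x, lambda y: y),          # identity: trivially valid
    (zlib.compress, zlib.decompress),    # a genuine compressor
    (lambda x: x[:1], lambda y: y),      # invalid: fails the round trip
]

def universal_compress(X):
    best = None
    for C, D in CANDIDATE_PAIRS:
        try:
            cx = C(X)
            if D(cx) != X:               # D(C(X)) != X => pair is invalid
                continue
        except Exception:
            continue
        if best is None or len(cx) < len(best):
            best = cx                    # keep the smallest valid |C(X)|
    return best

X = b"abab" * 100
assert len(universal_compress(X)) < len(X)   # zlib beats identity here
```

Because the identity pair is always valid, the output is never longer than X itself; a patterned string compresses well, echoing the slide's point that finding a pattern is evidence of comprehension.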

46 Compression-based communication. As Alice sends her string to Bob, Bob tries to compress it. (Figure: after 0.9n steps Bob has seen X and compressed it to C(X); after n steps he has seen X, X′ and compressed them to C(X, X′).) Such phenomena can occur! Surely they suggest intelligence/comprehension?

47 Discussion of the result. Alice needs to solve PSPACE: realistic? What about virus detection and spam filters? These solve undecidable problems! The PSPACE setting is natural and clean, and arises from the proof; other languages work too (SZK, NP ∩ coNP). Learning B′ takes exponential time! This is inevitable (minor theorem)… unless languages have structure. Future work.

48 Discussion of the result. Good news: if we take "self-verification" as an axiom, then meaningful learning is possible! This is a simple semantic principle; it is reasonable to assume that Alice (the alien) would have determined it as well, and so will use it to communicate with us (Bobs).

