
1 Selecting Suspicious Messages in Intercepted Communication. David Skillicorn, School of Computing, Queen's University; Research in Information Security, Kingston (RISK); Math and CS, Royal Military College.

2 Legal interception occurs in three main contexts: 1. Government broad-spectrum interception of communication for national defence and intelligence (e.g. Echelon). This usually excludes communication between citizens, who are hard to identify in practice, so a simple surrogate rule is usually applied. 2. Law enforcement interception pursuant to a warrant. 3. Organizational interception, typically of email and IM, looking for improper behaviour (e.g. SOX violations), criminal activity, and industrial espionage. Lots of other communication takes place in public, and so is always available for examination: chat, blogs, web pages.

3 For governments and organizations, the volumes of data intercepted are large: 3 billion messages per day for Echelon; 1 TB/day for the CIA. Finding anything interesting in this torrent is a challenge – 1 in a million or less. Early-stage processing must concentrate on finding the definitely uninteresting, so that it can be discarded. Selecting the potentially interesting can be done downstream, with more sophistication, because the volumes are much smaller.

4 The main approach for selection: use a set of keywords whose presence causes a message to be selected for further analysis. Example: the German Federal Intelligence Service, as of 2000 (certainly changed now): nuclear proliferation (2000 terms), arms trade (1000), terrorism (500), drugs (400). It also seems plausible that a range of other techniques are applied, based on properties such as content overlap, sender/receiver identities, times of transmission, specialized word use, etc. (social network analysis).

5 General strategy considerations

6 1. Models that assume that the problem is to discover the boundary between good and bad based on some fixed set of properties can be defeated easily. The carnival booth approach – probe to learn the boundary, then avoid it. Looking for a fixed set of anomalies means missing an unexpected anomaly. Randomizing the boundary can help. Better to look for anything unusual rather than the expected unusual.

7 2. It is hard for humans to behave unnaturally in a natural way. This is even more true when the behaviour is subconscious. Examples: Stephen Potter's 'One-Upmanship' for tennis players; customs interviews; digit choices on tax returns, accounts, and false invoices. So there's an inherent signature to unnatural behaviour, in any context.

8 3. Create a big, obvious, primary detection system … then create a secondary detection system that looks for reaction to (evasion of) the first system! Innocent people either don't know about or don't react to such a system; but those who are being deceptive cannot afford not to. (The more the primary system looks for markers that are subconsciously generated, the harder it is to react appropriately.) The boundary between innocence and reaction is often easier to detect than the boundary between innocence and deception.

9 How does this apply to communication? Most informal communication relies on subconscious mechanisms governing textual markers such as word choice, and voice markers such as pitch. Awareness of simple surveillance measures may cause problems with these mechanisms, creating detectable changes. The presence of a watchlist of words suggests substituting innocuous words – but word choice is also partly a subconscious process.

10 Detecting substitution in conversations

11 Replacing words that might be on the keyword watch list by other words or locutions could prevent messages from being selected based on their content. But … knowing that there is a watch list is not the same thing as knowing what's on it: 'bomb' is probably a word not to use; what about 'fertilizer', 'meeting', 'suicide', …? A keyword watch list plays the role of a primary selection mechanism; it doesn't matter that its existence is known, but it does matter that some of its details are unknown. Randomization can even be useful.

12 Substitution can be: * based on a codebook (e.g. 'attack' = 'wedding') * generated on the fly. We expect that most substitutions on the fly will replace a word with a new word whose natural frequency is quite different: 'attack' is the 1072nd most common English word; 'wedding' is the 2912nd most common English word. This can be avoided, but only with some attention – more later.

13 The use of a substitution with the 'wrong' frequency in a number of messages may make the entire conversation unusual enough to be detected. This has the added advantage that it can put together messages that belong together, even if their endpoints have been obscured.

14 Linguistic background. The frequency of words in English (and many other languages) follows a Zipf distribution – frequent words are very frequent, and frequency drops off very quickly. We restrict our attention to nouns. In English: most common noun – 'time'; 3262nd most common noun – 'quantum'.

15 A message-frequency matrix has a row corresponding to each message and a column corresponding to each noun. The ijth entry is the frequency of noun j in message i. The matrix is very sparse. We generate artificial datasets using a Poisson distribution with mean f × 1/(j+1), where f models the base frequency. We add 10 extra rows representing the correlated threat messages, using a block of 6 columns of uniformly random 0s and 1s, inserted at a fixed block of columns.
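A minimal numpy sketch of this synthetic dataset, assuming illustrative sizes (1000 messages, 200 nouns) and an illustrative column offset for the threat block, none of which are specified by the slides:

```python
import numpy as np

rng = np.random.default_rng(0)
n_messages, n_nouns, base_f = 1000, 200, 2.0

# Zipf-like expected frequencies: mean for noun j (1-indexed) is f * 1/(j+1)
means = base_f / (np.arange(1, n_nouns + 1) + 1)
A = rng.poisson(means, size=(n_messages, n_nouns))

# 10 extra rows for the correlated threat messages: a shared block of 6 nouns
# filled with uniformly random 0/1 entries (the column offset is illustrative)
threat = rng.poisson(means, size=(10, n_nouns))
threat[:, 50:56] = rng.integers(0, 2, size=(10, 6))
A = np.vstack([A, threat])
```

The resulting matrix is very sparse because the Poisson means fall off quickly with column index, mirroring the Zipf behaviour described above.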

16 [Figure: message-frequency matrix; rows = messages, columns = nouns]

17 Technology – matrix decompositions. The basic idea: * Treat the dataset as a matrix, A, with n rows and m columns; * Factor A into the product of two matrices, C and F: A = C F, where C is n x r, F is r x m, and r is smaller than m. Think of F as a set of underlying 'real' somethings and C as a way of 'mixing' these somethings together to get the observed attribute values. Choosing r smaller than m forces the decomposition to represent the data more compactly.

18 Two matrix decompositions are useful: Singular value decomposition (SVD) – the rows of F are orthogonal axes such that the maximum possible variation in the data lies along the first axis, the maximum of what remains along the second, and so on. The rows of C are coordinates in this space. Independent component analysis (ICA) – the rows of F are statistically independent factors, and the rows of C describe how to mix these factors to produce the original data. Strictly speaking, the rows of C are not coordinates, but we can plot them to get some idea of structure.
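A minimal numpy sketch of the SVD form of the decomposition (ICA would produce the same A = C F shape via, e.g., scikit-learn's FastICA); the matrix sizes here are illustrative, not taken from the slides:

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.poisson(1.0, size=(100, 40)).astype(float)   # toy message-noun matrix

# SVD of the centred data: A_c = U S Vt. Take C = U*S (message coordinates)
# and F = Vt (orthogonal axes, ordered by how much variation each captures).
mean = A.mean(axis=0)
U, S, Vt = np.linalg.svd(A - mean, full_matrices=False)
C = U * S

# Truncating to r = 3 gives the compact representation plotted on the slides.
C3 = C[:, :3]
```

Plotting the rows of `C3` is what produces the "first 3 dimensions" scatter plots shown next.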

19 First 3 dimensions – SVD The messages with correlated unusual word usage are marked with red circles

20 First 3 dimensions – ICA

21 (Fortunately) both unusual word use and correlated word use are necessary to make such messages detectable. Correlated, with proper word frequencies (SVD). So ordinary conversations don't show up as false positives!!

22 Correlation with proper word frequencies (ICA)

23 Uncorrelated, with unusual word frequencies (SVD). Conversations about unusual things don't show up as false positives either!!

24 Uncorrelated with unusual word frequencies (ICA)

25 This trick permits a new level of sophistication in connecting related messages into conversations when the usual indicators are not available. It does exactly the right thing – ignoring conversations about ordinary topics, and conversations about unusual topics, but homing in on conversations about unusual topics using inappropriate words. Because the dataset is sparse, SVD takes time linear in the number of messages. The complexity of ICA is less clear but there are direct hardware implementations (SOSUS).

26 Message-rank matrices are useful because they defend against the countermeasure of rules like "use the word 5 ranks below the one you want to use". A message-rank matrix has a row corresponding to each message, and a column corresponding to the rank, in English, of the jth most frequent noun in the message. Message-rank matrices have many fewer columns, which makes them easier and faster to work with (e.g. Enron dataset: 200,000+ words but average number of nouns per message <200).
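A sketch of building one row of a message-rank matrix; the rank table here is a tiny hypothetical stand-in for a real English frequency ranking (e.g. the BNC rankings mentioned later):

```python
from collections import Counter

# Hypothetical English frequency ranking: rank 1 = most common noun.
ENGLISH_RANK = {"time": 1, "people": 2, "way": 3, "attack": 1072, "wedding": 2912}

def message_rank_row(nouns, width=5):
    """English ranks of a message's nouns, most frequent noun first;
    short messages are padded with 0."""
    ordered = [w for w, _ in Counter(nouns).most_common()]
    ranks = [ENGLISH_RANK.get(w, 0) for w in ordered]
    return (ranks + [0] * width)[:width]

row = message_rank_row(["time", "time", "wedding", "wedding", "wedding", "people"])
# 'wedding' is this message's most frequent noun, so its English rank comes first
print(row)   # [2912, 1, 2, 0, 0]
```

A word substituted from far down the frequency list leaves a large rank where a small one is expected, which is what the SVD of the message-rank matrix exposes.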

27 [Figure: message-rank matrix; rows = messages, columns = rank of jth noun in message]

28 Replacing words with those, say, five positions down the list does not show up in the SVD of a message-frequency matrix:

29 But it's very clear in the SVD of a message-rank matrix:

30 Detecting substitutions in individual messages

31 What if the substitution is a word with the same natural frequency? Can we still detect the substitution because of a bump in the flow of the sentence? The graph of adjacent words in English has a small-world property – paths outwards to rare words quickly return to the common-word centre. So looking at frequencies of pairs (triples) of words is not very revealing. E.g., in "Everything is ready for the watch", the fragments "the watch", "for the watch", "ready for the watch", "is ready for the watch" get slowly more unusual.

32 We've developed a number of measures for the oddity of a word in a context. Each one independently is quite weak. However, combining them produces a usable detector. We use Google's responses as a surrogate for frequencies of words, quoted phrases, and bags of words in English. Google sees a lot of text… but it's a blunt instrument, because we only use the number of documents returned as a measure of frequency (this doesn't seem to matter), and Google's treatment of stop words is a bit awkward.

33 Measures I: Contextualized frequency. When a word is appropriate in a sentence, the frequencies f{ the, cat, sat, on, the, mat } and f{ the, sat, on, the, mat } should be quite similar. But … f{ the, unicorn, sat, on, the, mat } and f{ the, sat, on, the, mat } should be very different. This could signal that 'unicorn' is a substituted word.

34 So we define sentence oddity to be: sentence oddity = f(bag of words with word of interest omitted) / f(bag of words containing all words). The larger this measure is, the more likely the word of interest is a substitution (we hope). We use the frequency of a bag of words because most strings of any length don't occur at all, even at Google. However, short strings might occur with measurable frequency – this is the basis of our second measure.
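A sketch of the sentence-oddity ratio; the frequency oracle here is a tiny hypothetical table with made-up counts, standing in for the Google document counts the slides describe:

```python
# Toy frequency oracle over bags of words (hypothetical counts standing in
# for Google document-count queries).
FREQ = {
    frozenset({"the", "cat", "sat", "on", "mat"}): 120000,
    frozenset({"the", "sat", "on", "mat"}): 150000,
    frozenset({"the", "unicorn", "sat", "on", "mat"}): 40,
}

def sentence_oddity(words, target):
    """f(bag without target) / f(bag with all words); large => likely substitution."""
    full = frozenset(words)
    reduced = frozenset(w for w in words if w != target)
    return FREQ[reduced] / FREQ[full]

print(sentence_oddity(["the", "cat", "sat", "on", "the", "mat"], "cat"))          # 1.25
print(sentence_oddity(["the", "unicorn", "sat", "on", "the", "mat"], "unicorn"))  # 3750.0
```

Removing an appropriate word barely changes the bag's frequency, so the ratio stays near 1; removing an out-of-place word raises it dramatically.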

35 Measures II: k-gram frequency. Given a word of interest, its left k-gram is the string preceding the word of interest up to and including the first non-stopword; its right k-gram is the string following the word of interest up to and including the first non-stopword. "A nine mile walk is no joke" (f = 33): left k-gram "mile walk" (f = 50); right k-gram "walk is no joke" (f = 876,000).

36 Using a k-gram avoids the problems of a small-world adjacency graph – it ignores visits to the (boring) middle region of the graph, but captures connections between close visits to the outer layers. It's a way to get a kind of 2-gram, both of whose words are non-trivial. If the word of interest is a substitute, both its left and right k-grams should be small. Left and right k-grams measure very different properties of sentences.
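A sketch of the k-gram extraction rule, assuming a small illustrative stopword set (the real stopword list is not given on the slides):

```python
STOPWORDS = {"a", "the", "is", "no", "of", "for", "to", "and", "in", "on"}

def k_grams(words, i):
    """Left/right k-grams of words[i]: extend across stopwords until the first
    non-stopword on each side, inclusive."""
    left = i
    while left > 0:
        left -= 1
        if words[left] not in STOPWORDS:
            break
    right = i
    while right < len(words) - 1:
        right += 1
        if words[right] not in STOPWORDS:
            break
    return " ".join(words[left:i + 1]), " ".join(words[i:right + 1])

sentence = "a nine mile walk is no joke".split()
print(k_grams(sentence, 3))   # ('mile walk', 'walk is no joke')
```

Running this on the slide's example reproduces its left and right k-grams: both phrases pair the word of interest with the nearest non-trivial word.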

37 Measures III: Hypernym oddity The hypernym of a noun is a more general term that includes the class of things described by the noun. e.g. broodmare – mare – horse – equine – odd-toed ungulate – hoofed mammal – mammal – vertebrate Notice that the chain oscillates between ordinary words and technical terms. In informal text, ordinary words are much more likely than technical terms. However, a substitution might be a much less ordinary word in this context.

38 We define the hypernym oddity to be: f(bag of words with word of interest replaced by its hypernym) – f(bag of words with word of interest). We expect this measure to be positive when the word of interest is a substitution, and close to zero or negative when the word is appropriate. Although hypernyms are semantic relatives of the original words, we can get them automatically using Wordnet – although there are usually multiple hypernyms, and we can't tell automatically which one is 'right'.
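A sketch of the hypernym-oddity difference; the hypernym table and frequency counts here are hypothetical stand-ins (the real system pulls hypernyms from Wordnet and frequencies from Google):

```python
# Toy hypernym table and bag-of-words frequency oracle (hypothetical values).
HYPERNYM = {"cat": "feline", "mare": "horse"}
FREQ = {
    frozenset({"the", "cat", "sat"}): 90000,
    frozenset({"the", "feline", "sat"}): 1200,
}

def hypernym_oddity(words, target):
    """f(bag with target replaced by its hypernym) - f(bag with target).
    Expected positive for a substituted word, near zero or negative otherwise."""
    swapped = frozenset(HYPERNYM[target] if w == target else w for w in words)
    return FREQ[swapped] - FREQ[frozenset(words)]

print(hypernym_oddity(["the", "cat", "sat"], "cat"))   # 1200 - 90000 = -88800
```

For an appropriate word like 'cat', the ordinary word is far more frequent than its technical hypernym, so the measure goes strongly negative, as expected.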

39 Pointwise mutual information (PMI): PMI = f(word + adjacent region) / ( f(word) × f(adjacent region) ), where + is concatenation in either direction, and the maximum is taken over all adjacent regions that have non-zero frequencies. PMI blends some of the properties of sentence oddity and k-grams. It looks for stable phrases (those with high frequency).
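A sketch of the PMI measure over a hypothetical phrase-frequency oracle (all counts invented for illustration), taking the maximum over candidate adjacent regions:

```python
# Toy frequency oracle over words and phrases (hypothetical counts).
FREQ = {"new": 5_000_000, "york": 900_000, "new york": 800_000,
        "nine": 400_000, "mile": 300_000, "nine mile": 90}

def pmi(word, regions):
    """Max over adjacent regions of f(word + region) / (f(word) * f(region))."""
    best = 0.0
    for r in regions:
        joint = FREQ.get(f"{word} {r}", 0)
        if joint and FREQ.get(r):
            best = max(best, joint / (FREQ[word] * FREQ[r]))
    return best

# 'new york' is a stable phrase: its joint frequency is high relative to its
# parts, so its PMI dwarfs that of the looser pairing 'nine mile'.
print(pmi("new", ["york"]) > pmi("nine", ["mile"]))   # True
```

A substituted word breaks whatever stable phrase the original belonged to, so its best PMI against neighbouring regions tends to be low.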

40 We use the Enron corpus (and the Brown news corpus), and extract sentences at random. In email, such sentences are often very unusual – typos, shorthand, technical terms – so it's difficult data to work with. We replaced the first noun in each Enron sentence by the noun with the closest frequency, using the BNC frequency rankings, and removed sentences where the new noun wasn't known to Wordnet or the sentence (as a bag of words) occurred too infrequently at Google. This left a set of 1714 ordinary sentences and a set of 1714 sentences containing a noun substitution. Having two sets of sentences allowed us to train a decision tree on each of the measures to determine a good boundary value between ordinary and substitution sentences.

41 Enron dataset. Measures evaluated, each with a detection rate, false positive rate, and learned boundary: sentence oddity, enhanced sentence oddity, left k-gram, right k-gram, average k-gram, minimum hypernym, maximum hypernym, average hypernym, PMI, and the combined detector (95 / 11).

42 Each individual measure is very weak. However, they make their errors on different sentences, so combining their predictions does much better than any individual measure. Results for the Brown corpus are similar, although (surprisingly) a little weaker – we expected that more formal language would make substitutions easier to spot. This may reflect changing writing styles, under-represented at Google. Results are the same when Yahoo is used as a frequency oracle.
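A sketch of one simple way to combine the weak detectors: a majority vote over per-measure boundaries. The thresholds here are hypothetical placeholders; the slides describe learning each boundary with a decision tree rather than voting with fixed values:

```python
# Hypothetical per-measure boundaries, standing in for those learned
# by the decision trees described above.
BOUNDARIES = {"sentence_oddity": 2.0, "left_kgram": 100, "hypernym": 0.0}

def fires(measure, value):
    # Direction matters: high sentence oddity or hypernym oddity is
    # suspicious, but a LOW k-gram frequency is suspicious.
    if measure == "left_kgram":
        return value < BOUNDARIES[measure]
    return value > BOUNDARIES[measure]

def combined_flag(scores):
    """Flag a sentence as containing a substitution when a majority
    of the weak detectors fire."""
    votes = sum(fires(m, v) for m, v in scores.items())
    return 2 * votes > len(scores)

print(combined_flag({"sentence_oddity": 5.0, "left_kgram": 12, "hypernym": -3.0}))  # True
```

Because the measures err on different sentences, even this crude vote suppresses most of the individual detectors' false positives.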

43 Detecting offline connections using online word usage

44 Analysing a matrix whose rows represent the emails of individuals, whose columns represent words, and whose entries are the frequency of use of words by individuals in Enron emails allows us to address questions such as: * Does word usage vary with company role, either explicit or implicit? * Do people who communicate offline develop similarities in their word usage? * Can changing word usage over time reveal changing offline relationships? YES to all three.

45 Enron 1999

46 Enron 2000

47 Enron, 1st half of 2001

48 Enron, 2nd half of 2001

49 Detecting mental state using word usage

50 Humans leak information about their mental states quite strongly – but we are not wired to notice. The leakage comes via frequencies and frequency changes in little words, such as pronouns. Detection via software is straightforward. Detecting mental state means that we can: * decide which parts of bin Laden's messages he believes and which are pitched for particular audiences * distinguish between testosterone and terrorism on Salafist websites * assess the truthfulness of witnesses (and politicians)

51 We've had some success with detecting: * deceptive emails in the Enron corpus * speeches with spin in the Winter 2006 federal election * testimony at the Gomery commission. Validation is still an issue, as are differences in the signature of deception in different contexts.

52 Summary: Language production is mostly a subconscious process, so it is hard to use it unnaturally. Even with knowledge of detection systems, it is difficult to adjust language production to remain concealed. This can be exploited using layered detection systems, with the second layer looking for reaction to the existence of the first layer. This kind of 'shallow' analysis of language hasn't been explored much, so there's lots of potential for new, powerful detection techniques.

53 ?

