Presentation transcript:

1 Collaborative filtering (also known as customer preference prediction, or Business Intelligence) is critical for on-line retailers (Netflix, Amazon, Yahoo, ...). It is just classical classification: based on a rating-history training set, predict how customer c would rate item i. Can we use relationships to find "neighbors" and predict rating(c=3, i=5)?

[Figure: the training set below rendered three ways: the Binary Relationship model (one 0/1 matrix k(C,I) per rating value k=1..5, over customers C=1..4 and items I=2..5), the Rolodex Relationship model (the transposed matrices k(I,C)), and the Multihop Relationship model chaining them.]

TrainingSet (rating(3,5) is withheld as the prediction target):

C  I  Rating
1  2  2
1  3  3
1  4  5
1  5  1
2  2  3
2  3  2
2  4  3
2  5  2
3  2  2
3  3  5
3  4  1
4  2  4
4  3  1
4  4  5
4  5  5

Find all customers whose rating history is similar to that of c=3. That is, for each rating value k=1,2,3,4,5, find all other customers who give rating k to the movies that c=3 gives rating k to: AND the customer pTrees k_i (from the relationship k(C,I)) over the items i that c=3 rated k. Then intersect those per-k customer sets across k, and let the resulting customers vote to predict rating(c=3, i=5).
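This AND-and-vote procedure maps directly onto bitwise operations. Below is a minimal sketch, assuming plain Python integers as customer pTrees; the names kIC, neighbors, and predict are illustrative, not from any pTree library. On this tiny training set the strict intersection turns out to be empty, so the vote abstains; a practical system would relax the exact-agreement neighborhood.

```python
ratings = {  # (customer, item) -> rating, the TrainingSet above
    (1, 2): 2, (1, 3): 3, (1, 4): 5, (1, 5): 1,
    (2, 2): 3, (2, 3): 2, (2, 4): 3, (2, 5): 2,
    (3, 2): 2, (3, 3): 5, (3, 4): 1,
    (4, 2): 4, (4, 3): 1, (4, 4): 5, (4, 5): 5,
}
customers, items = [1, 2, 3, 4], [2, 3, 4, 5]

# Rolodex pTrees k(I,C): for each rating k and item i, a customer
# bit vector marking who rated item i with rating k (bit c = customer c).
kIC = {k: {i: 0 for i in items} for k in range(1, 6)}
for (c, i), k in ratings.items():
    kIC[k][i] |= 1 << c

def neighbors(c):
    """AND, over every (item, rating) that c gave, the pTree of customers
    who gave that same rating to that item; then drop c itself."""
    agree = sum(1 << x for x in customers)
    for (cc, i), k in ratings.items():
        if cc == c:
            agree &= kIC[k][i]
    return [x for x in customers if x != c and agree >> x & 1]

def predict(c, i):
    votes = [ratings[(n, i)] for n in neighbors(c) if (n, i) in ratings]
    return sum(votes) / len(votes) if votes else None

# No customer matches c=3 on all three of its ratings, so the vote abstains:
print(neighbors(3), predict(3, 5))   # -> [] None
```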

2 Collaborative filtering (AKA customer preference prediction or Business Intelligence) is critical for on-line retailing (e.g., Netflix, Amazon, Yahoo, ...). Can we use MYRRH to predict rating(c=3, i=5)?

[Figure: the slide-1 rating matrices again, in the Binary, Rolodex, and Multihop relationship models.]

Approach 2: judging that rating=3 means "no opinion", focus the count on the middle (customer) axis.

3 50% Satlog-Landsat sample, stride=64. Classes: redsoil, cotton, greysoil, dampgreysoil, stubble, verydampgreysoil.

[Figure: the pairwise Rolodex matrices RG, Rir1, Rir2, Rclass, Gir1, Gir2, Gclass, ir1ir2, ir1class, ir2class over the band-value axes 0..255 and the class axis (r, c, g, d, s, v).]

A level-0 pVector is a bit string with 1 bit per record. A level-1 pVector is a bit string with 1 bit per record stride, giving the truth of a predicate applied to that stride. An n-level pTree consists of level-k pVectors (k = 0, ..., n-1), all with the same predicate, such that each level-k stride is contained within one level-(k-1) stride.

For the 50% Satlog-Landsat sample at stride=320 we get the tables below. Note that at stride=320 the means are way off, so it will probably produce very inaccurate classification.

Class extents (records start..end per class):
cls 1: 2..1073     cls 2: 1074..1552    cls 3: 1553..2513
cls 4: 2514..2928  cls 5: 2929..3398    cls 7: 3399..4435

320-bit strides lying within a single class (cls: stride start..end):
1: 2..321, 322..641, 642..961
2: 1074..1393
3: 1553..1872, 1873..2192, 2193..2512
4: 2514..2833
5: 2929..3248
7: 3399..3718, 3719..4038, 4039..4358

Per-class means and stds at stride=320:
cls   R mean (std)   G mean (std)    ir1 mean (std)   ir2 mean (std)
1     64.33 (6.80)   104.33 (3.77)   112.67 (0.94)    100.00 (16.31)
2     46.00 (0.00)   35.00 (0.00)    98.00 (0.00)     66.00 (0.00)
3     89.33 (1.89)   101.67 (3.77)   101.33 (3.77)    85.33 (3.77)
4     78.00 (0.00)   91.00 (0.00)    96.00 (0.00)     78.00 (0.00)
5     57.00 (0.00)   53.00 (0.00)    66.00 (0.00)     57.00 (0.00)
7     67.67 (1.70)   76.33 (1.89)    74.00 (0.00)     67.67 (1.70)
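As a concrete illustration of the level-0/level-1 definition, here is a small sketch (plain Python; the "at least half the bits are 1" predicate and the names level1/stride are assumptions for the example, not part of the definition above):

```python
# Build a level-1 pVector from a level-0 pVector: one bit per stride,
# the truth of a predicate applied to that stride's bits.
def level1(level0, stride, predicate=lambda chunk: sum(chunk) >= len(chunk) / 2):
    return [1 if predicate(level0[s:s + stride]) else 0
            for s in range(0, len(level0), stride)]

lv0 = [1, 1, 1, 0,  0, 0, 0, 0,  1, 0, 1, 1]   # 12 records, 3 strides of 4
print(level1(lv0, stride=4))                    # -> [1, 0, 1]
```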

4 [Table: 50% sample, stride=64. Four value-by-class count tables, one per band (R, G, ir1, ir2), listing for each level-1 value (14..126) the number of strides of each class (1, 2, 3, 4, 5, 7) taking that value.]

5 APPENDIX: FAUST Oblique. The formula is the predicate pTree P_{X∘d < a}, where X is any set of vectors (e.g., a training class).

[Figure: a 2-D scatter of r's and v's with the class means m_r and m_v, the d-line through them, and the cut line.]

To separate the r's from the v's using the midpoint of means as the cut point, calculate a as follows. Let D ≡ m_v − m_r and d = D/|D|. Viewing m_r and m_v as vectors (e.g., m_r ≡ the vector from the origin to the point m_r),

a = ( m_r + (m_v − m_r)/2 ) ∘ d = ( (m_r + m_v)/2 ) ∘ d.

What if d points away from the intersection of the cut hyperplane (a cut line in this 2-D case) and the d-line, as it does for class v when d = (m_v − m_r)/|m_v − m_r|? Then a is the negative of the distance shown (the angle is obtuse, so its cosine is negative). But each v∘d is then a larger negative number than a = ((m_r + m_v)/2)∘d, so we still want v∘d < ((m_v + m_r)/2)∘d.
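A small numeric sketch of this mean-midpoint cut (numpy assumed; midpoint_cut is an illustrative name, and the toy R/V data are not from the slide):

```python
import numpy as np

def midpoint_cut(R, V):
    """d = D/|D| with D = m_v - m_r; cut a = ((m_r + m_v)/2) . d.
    With d pointing from m_r toward m_v, classify x as r when x.d < a."""
    m_r, m_v = R.mean(axis=0), V.mean(axis=0)
    d = (m_v - m_r) / np.linalg.norm(m_v - m_r)
    return d, (m_r + m_v) / 2 @ d

R = np.array([[1., 2.], [2., 1.], [1.5, 1.5]])   # toy r class
V = np.array([[6., 7.], [7., 6.], [6.5, 6.5]])   # toy v class
d, a = midpoint_cut(R, V)
print((R @ d < a).all(), (V @ d < a).any())      # -> True False
```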

6 FAUST Oblique, Approach 1: vector of stds. P_{X∘d < a} = P_{Σ d_i X_i < a}. Let D ≡ m_v − m_r and d = D/|D|.

[Figure: the same 2-D r/v scatter with the class means m_r, m_v and the d-line.]

To separate r from v using a stds-weighted cut point instead of the midpoint, calculate a as follows. Viewing m_r and m_v as vectors,

a = ( m_r + (m_v − m_r)·std_r/(std_r + std_v) ) ∘ d
  = ( m_r·std_v/(std_r + std_v) + m_v·std_r/(std_r + std_v) ) ∘ d,

so the cut sits between the means, pulled toward the class with the smaller spread. Where do the stds come from? Approach 1: for each coordinate (dimension), calculate the std of that coordinate's values over the class, and use the vector of those stds. Let's remind ourselves that the formula (Md's formula) does not require looping through the X values; it requires only one AND program across the pTrees: P_{X∘d < a} = P_{Σ d_i X_i < a}.

7 FAUST Oblique, Approach 2: stds of the projections. P_{X∘d < a} = P_{Σ d_i X_i < a}. Let D ≡ m_v − m_r and d = D/|D|.

[Figure: the r/v scatter with the projected means pm_r and pm_v marked on the d-line.]

By pm_r we mean the distance m_r ∘ d, which is also mean{r∘d | r ∈ R}; by pstd_r we mean std{r∘d | r ∈ R} (and likewise pm_v, pstd_v for the class V). To separate r from v using the stds of the projections, calculate a as follows:

a = pm_r + (pm_v − pm_r)·pstd_r/(pstd_r + pstd_v)
  = ( pm_r·pstd_v + pm_v·pstd_r ) / (pstd_r + pstd_v).

Next variant: double pstd_r,

a = pm_r + (pm_v − pm_r)·2pstd_r/(2pstd_r + pstd_v)
  = ( pm_r·pstd_v + 2·pm_v·pstd_r ) / (2pstd_r + pstd_v).

In this case the predicted classes will overlap (i.e., a given sample point may be assigned multiple classes), so we will have to order the class predictions.
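The projection-based cut can be sketched the same way (numpy assumed; projection_cut is an illustrative name, and mult=2 gives the "doubling pstd_r" variant):

```python
import numpy as np

def projection_cut(R, V, mult=1.0):
    """Cut on the d-line placed mult*pstd_r/(mult*pstd_r + pstd_v) of the
    way from pm_r to pm_v, exactly the formula above."""
    D = V.mean(0) - R.mean(0)
    d = D / np.linalg.norm(D)
    pr, pv = R @ d, V @ d                     # projections r.d and v.d
    pm_r, pm_v = pr.mean(), pv.mean()         # pm_r = mean{r.d | r in R}
    ps_r, ps_v = mult * pr.std(), pv.std()    # pstd_r = std{r.d | r in R}
    return d, pm_r + (pm_v - pm_r) * ps_r / (ps_r + ps_v)

R = np.array([[1., 2.], [2., 1.], [1.5, 1.5], [1., 1.]])
V = np.array([[6., 7.], [7., 6.], [6.5, 6.5], [8., 8.]])
# Doubling pstd_r moves the cut toward the v class:
print(projection_cut(R, V)[1] < projection_cut(R, V, mult=2)[1])   # -> True
```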

8 FAUST Satlog evaluation.

Class means and stds (50% sample):
cls   R mean (std)   G mean (std)    ir1 mean (std)   ir2 mean (std)
1     62.83 (8)      95.29 (15)      108.12 (13)      89.50 (9)
2     48.84 (8)      39.91 (13)      113.89 (13)      118.31 (19)
3     87.48 (5)      105.50 (7)      110.60 (7)       87.46 (6)
4     77.41 (6)      90.94 (8)       95.61 (8)        75.35 (7)
5     59.59 (6)      62.27 (12)      83.02 (13)       69.95 (13)
7     69.01 (5)      77.42 (8)       81.59 (9)        64.13 (7)

Class totals: 1: 461, 2: 224, 3: 397, 4: 211, 5: 237, 7: 470.

Results by class (1's 2's 3's 4's 5's 7's):

NonOblique level-0:
TP: 99 193 325 130 151 257

NonOblique level-1, 50%:
TP: 212 183 314 103 157 330    FP: 14 1 42 103 36 189

Oblique level-0 using midpoint of means:
TP: 322 199 344 145 174 353    FP: 28 3 80 171 107 74

Oblique level-0 using means and stds of projections (without class elimination):
TP: 359 205 332 144 175 324    FP: 29 18 47 156 131 58

Oblique level-0, means and stds of projections, with class elimination in 2345671 order (note that none occurs):
TP: 359 205 332 144 175 324    FP: 29 18 47 156 131 58

Doubling pstd_r: a = pm_r + (pm_v − pm_r)·2pstd_r/(2pstd_r + pstd_v) = ( pm_r·pstd_v + 2·pm_v·pstd_r ) / (2pstd_r + pstd_v).

Oblique level-0, doubling pstd_r:
TP: 410 212 277 179 199 324    FP: 114 40 113 259 235 58

Oblique level-0, doubling pstd_r, classify and eliminate in 2,3,4,5,7,1 order:
TP: 309 212 277 154 163 248    FP: 22 40 65 211 196 27

So the number of FPs is drastically reduced and the TPs are somewhat reduced. Is that better? If we parameterize the 2 (the doubling factor) and tune it to maximize TPs and minimize FPs, what is the optimal multiplier value? Next, low-to-high std elimination ordering.

Oblique level-0, doubling pstd_r, eliminate in 3,4,7,5,1,2 order:
TP: 329 189 277 154 164 307    FP: 25 1 113 211 121 33

Ratios above = (std + std_up)/gap_up and below = (std + std_dn)/gap_dn per band (red, green, ir1, ir2; blank where a class has no neighbor above or below on that band), suggesting the elimination order 425713:
cls 1: 4.33 2.10  5.29 2.16  1.68 8.09   13.11 0.94   avg 4.71
cls 2: 1.30 1.12  6.07 0.94                           avg 2.36
cls 3: 1.09 2.16  8.09 6.07  1.07 13.11               avg 5.27
cls 4: 1.31 1.09  1.18 5.29  1.67 1.68   3.70 1.07    avg 2.12
cls 5: 1.30 4.33  1.12 1.32  15.37 1.67  3.43 3.70    avg 4.03
cls 7: 2.10 1.31  1.32 1.18  15.37 3.43               avg 4.12
Sorted by avg: 4 (2.12), 2 (2.36), 5 (4.03), 7 (4.12), 1 (4.71), 3 (5.27).

2s1/(2s1+s2), elimination order 425713:
TP: 355 205 224 179 172 307    FP: 37 18 14 259 121 33

Summary (level-1, 50%; class totals 461, 224, 397, 211, 237, 470):
method                          1's  2's  3's  4's  5's  7's  total
s1/(s1+s2), no elim       TP:   359  205  332  144  175  324  1539
                          FP:    29   18   47  156  131   58   439
2s1/(2s1+s2), no elim     TP:   410  212  277  179  199  324  1601
                          FP:   114   40  113  259  235   58   819
2s1/(2s1+s2), ord 234571  TP:   309  212  277  154  163  248  1363
                          FP:    22   40   65  211  196   27   561
2s1/(2s1+s2), ord 347512  TP:   329  189  277  154  164  307  1420
                          FP:    25    1  113  211  121   33   504
2s1/(2s1+s2), ord 425713  TP:   355  189  277  154  164  307  1446
                          FP:    37   18   14  259  121   33   482
s1/(s1+s2), ord 425713    TP:   355  189  277  154  164  307  1446
                          FP:    37   18   14  259  121   33   482
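The "optimal multiplier" question can be explored with a simple sweep. A hedged sketch (numpy assumed; the cut helper restates the slide-7 formula with pstd_r scaled by mult, and scoring by TP − FP on the training data is just one of many possible criteria):

```python
import numpy as np

def cut(R, V, mult):
    """Projection cut a = pm_r + (pm_v - pm_r)*mult*pstd_r/(mult*pstd_r + pstd_v)."""
    d = V.mean(0) - R.mean(0)
    d /= np.linalg.norm(d)
    pr, pv = R @ d, V @ d
    pm_r, pm_v = pr.mean(), pv.mean()
    ps_r, ps_v = mult * pr.std(), pv.std()
    return d, pm_r + (pm_v - pm_r) * ps_r / (ps_r + ps_v)

def sweep_multiplier(R, V, mults=np.linspace(0.5, 3.0, 26)):
    """Score each multiplier: r's kept below the cut (TP) minus v's
    leaking below it (FP); return the best (score, mult, TP, FP)."""
    best = None
    for m in mults:
        d, a = cut(R, V, m)
        tp, fp = int((R @ d < a).sum()), int((V @ d < a).sum())
        if best is None or tp - fp > best[0]:
            best = (tp - fp, float(m), tp, fp)
    return best
```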

9 Can MYRRH classify? (Pixel classification?) Try 4-hop rule mining on the attributes of IRIS(Cls, SL, SW, PL, PW), stride=10, level-1 values:

val         SL  SW  PL  PW
setosa      38  38  14   2
setosa      50  38  15   2
setosa      50  34  16   2
setosa      48  42  15   2
setosa      50  34  12   2
versicolor   1  24  45  15
versicolor  56  30  45  14
versicolor  57  28  32  14
versicolor  54  26  45  13
versicolor  57  30  42  12
virginica   73  29  58  17
virginica   64  26  51  22
virginica   72  28  49  16
virginica   74  30  48  22
virginica   67  26  50  19

Binned to single digits, rnd(value/10) for each of SL, SW, PL, PW:
4 4 1 0
5 4 2 0
5 3 2 0
5 4 2 0
5 3 1 0
0 2 5 2
6 3 5 1
6 3 3 1
5 3 5 1
6 3 4 1
7 3 6 2
6 3 5 2
7 3 5 2
7 3 5 2
7 3 5 2

[Figure: the 4-hop chain SW -R- PW -S- PL -T- SL -U- CLS drawn as four 0/1 matrices R, S, T, U over the binned axes 0..7 and CLS ∈ {se, ve, vi}, with the intermediate PL pTrees pl={1,2} and pl={1} marked.]

C = {se}, A = {3,4} (SW values). Is A ⇒ C confident?

conf = ct( (&_{pw ∈ &_{sw∈A} R_sw} S_pw) & (&_{sl ∈ &_{cls∈C} U_cls} T_sl) ) / ct( &_{pw ∈ &_{sw∈A} R_sw} S_pw ) = 1/2

(the left factor reduces to the PL pTree pl={1,2}; ANDed with the right factor it reduces to pl={1}).
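The 4-hop count itself is just nested pTree ANDs. A sketch, again with Python ints as pTrees over the binned axes; the relationship encodings (R: sw → PW mask, S: pw → PL mask, T: sl → PL mask, U: cls → SL mask) mirror the formula above, but the helper names and this driver are assumptions for illustration, not the slide's actual implementation:

```python
def bits(mask):
    """Positions of the 1 bits in a pTree."""
    return [b for b in range(mask.bit_length()) if mask >> b & 1]

def and_all(masks, full):
    """AND a sequence of pTrees, starting from the all-ones pTree `full`."""
    for m in masks:
        full &= m
    return full

def conf_4hop(A, C, R, S, T, U, full_pw, full_sl, full_pl):
    """ct(lhs & rhs) / ct(lhs) on the middle (PL) axis, per the formula."""
    pw_set = and_all((R[sw] for sw in A), full_pw)          # & over sw in A
    lhs = and_all((S[pw] for pw in bits(pw_set)), full_pl)  # A-side PL pTree
    sl_set = and_all((U[c] for c in C), full_sl)            # & over cls in C
    rhs = and_all((T[sl] for sl in bits(sl_set)), full_pl)  # C-side PL pTree
    return bin(lhs & rhs).count("1") / bin(lhs).count("1")
```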

10 1-hop: IRIS(Cls, SL, SW, PL, PW), stride=10, level-1 (the same 15-row value table as slide 9).

Binary relationship R(SW,CLS) (SW bins 0..7):
se: 0 0 0 1 1 0 0 0
ve: 0 0 1 1 0 0 0 0
vi: 0 0 0 1 0 0 0 0

With A = {3,4} (SW values) and C = {se}, the 1-hop rule A ⇒ C is more confident:

conf = ct( R_{sw∈{3,4}} & &_{cls∈{se}} R_cls ) / ct( R_{sw∈{3,4}} ) = 1.

But what about just reading rules off R(SW,CLS) alone? That gives {3,4} ⇒ se, {2,3} ⇒ ve, {3} ⇒ vi, which is not very differentiating of class. Include the other three attributes? The value-by-class counts (bins 0..7) are:

SW x CLS:                     SL x CLS:
se: 0 0 0 2 3 0 0 0           se: 0 0 0 0 1 4 0 0
ve: 0 0 1 4 0 0 0 0           ve: 1 0 0 0 0 1 3 0
vi: 0 0 0 5 0 0 0 0           vi: 0 0 0 0 0 0 1 4

PL x CLS:                     PW x CLS:
se: 0 2 3 0 0 0 0 0           se: 5 0 0 0 0 0 0 0
ve: 0 0 0 1 1 3 0 0           ve: 0 4 1 0 0 0 0 0
vi: 0 0 0 0 0 4 1 0           vi: 0 1 4 0 0 0 0 0

giving the rules:
SL: {4,5} ⇒ se, {5,6} ⇒ ve, {6,7} ⇒ vi
SW: {3,4} ⇒ se, {2,3} ⇒ ve, {3} ⇒ vi
PL: {1,2} ⇒ se, {3,4,5} ⇒ ve, {5,6} ⇒ vi
PW: {0} ⇒ se, {1,2} ⇒ ve, {1,2} ⇒ vi

These rules were derived from the binary relationships only. A minimal decision tree classifier suggested by the rules (applied to the per-attribute rules below, see the sketch after this slide):

if PW = 0:  se
else if PL ∈ {3,4} and SW = 2 and SL = 5:  ve
else if 2 of 3 of [ PL ∈ {3,4,5}, SW ∈ {2,3}, SL ∈ {5,6} ]:  ve
else:  vi

I was hoping for a "Look at that!" but it didn't happen ;-)
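The four per-attribute rules can also be applied directly, e.g., by simple voting. A sketch (the voting scheme and the classify helper are assumptions for illustration; the rule sets are exactly those read off the binary relationships above; ties go to the first class listed):

```python
RULES = {  # attribute -> class -> bin values, from the rules above
    "SL": {"se": {4, 5}, "ve": {5, 6}, "vi": {6, 7}},
    "SW": {"se": {3, 4}, "ve": {2, 3}, "vi": {3}},
    "PL": {"se": {1, 2}, "ve": {3, 4, 5}, "vi": {5, 6}},
    "PW": {"se": {0}, "ve": {1, 2}, "vi": {1, 2}},
}

def classify(sample):
    """sample: binned values, e.g. {'SL': 5, 'SW': 4, 'PL': 2, 'PW': 0}."""
    votes = {cls: 0 for cls in ("se", "ve", "vi")}
    for attr, val in sample.items():
        for cls, bins in RULES[attr].items():
            votes[cls] += val in bins   # one vote per matching interval
    return max(votes, key=votes.get)

print(classify({"SL": 5, "SW": 4, "PL": 2, "PW": 0}))   # a setosa row -> se
```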

11 2-hop, stride=10, level-1 (the same 15-row value table as slide 9).

[Figure: the relationships T(SL,PL) and U(SL,CLS) as matrices over the binned axes; U's counts by SL over (se, ve, vi): 0 → (0,1,0), 4 → (1,0,0), 5 → (4,1,0), 6 → (0,3,1), 7 → (0,0,4).]

With A = {1,2} (PL values) and C = {se}:

conf = ct( (OR_{pl∈A} T_pl) & (&_{cls∈C} U_cls) ) / ct( OR_{pl∈A} T_pl ) = 1,

where OR_{pl∈A} T_pl is the SL pTree sl = {4,5}.

Mine out all confident se-rules with minconf = 3/4. Closure: if A ⇒ {se} is nonconfident and A's support covers U_se, then B ⇒ {se} is nonconfident for every B ⊇ A. So, starting with the singleton A's:

ct(T_pl=1 & U_se) / ct(T_pl=1) = 2/2  yes.
ct(T_pl=2 & U_se) / ct(T_pl=2) = 1/1  yes.
ct(T_pl=3 & U_se) / ct(T_pl=3) = 0/1  no.
ct(T_pl=4 & U_se) / ct(T_pl=4) = 0/1  no.
ct(T_pl=5 & U_se) / ct(T_pl=5) = 1/2  no.
ct(T_pl=6 & U_se) / ct(T_pl=6) = 0/1  no.
etc.

A = {1,3}, {1,4}, {1,5}, or {1,6} yields nonconfidence with support covering U_se, so all supersets yield nonconfidence. A = {2,3}, {2,4}, {2,5}, or {2,6} yields nonconfidence, but the closure property does not apply. A = {1,2} yields confidence. I conclude that this closure property is just too weak to be useful. It also appears from this example that trying to use MYRRH to do classification (at least in this way) is not productive.
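The singleton tests and the closure prune translate to a few lines. A sketch, again with Python ints as pTrees (T[pl] = SL pTree of that PL bin, U_se = SL pTree of setosa); the search driver confident_rules is an assumption for illustration:

```python
from itertools import combinations

def ct(mask):
    return bin(mask).count("1")   # count of 1 bits in a pTree

def confident_rules(T, U_se, minconf=3 / 4, max_size=3):
    """Grow antecedents A and keep the confident ones, pruning supersets
    of any nonconfident A whose OR-support already covers U_se (the
    closure property above: a growing denominator can't recover)."""
    pruned, confident = [], []
    for size in range(1, max_size + 1):
        for A in combinations(sorted(T), size):
            if any(set(A) >= p for p in pruned):
                continue                       # closed out by a smaller A
            support = 0
            for pl in A:                       # OR of the T pTrees over A
                support |= T[pl]
            if ct(support & U_se) / ct(support) >= minconf:
                confident.append(A)
            elif support & U_se == U_se:       # support covers all of U_se
                pruned.append(set(A))          # so no superset is confident
    return confident
```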

12 Level-2, 50% sample, stride=640. Classes: redsoil, cotton, greysoil, dampgreysoil, stubble, verydampgreysoil.

[Figure: the level-2 Rolodex matrices RG, Rir1, Rir2, Rclass, Gir1, Gir2, Gclass, ir1ir2, ir1class, ir2class over the binned value axes 1..7 and the class axis (r, c, g, d, s, v).]

