# CSCI-365 Computer Organization Lecture Note: Some slides and/or pictures in the following are adapted from: Computer Organization and Design, Patterson.


Some slides and/or pictures in the following are adapted from *Computer Organization and Design*, Patterson & Hennessy, ©2005, and from slides ©2008 UCB.

Number Representations

32-bit signed numbers (2's complement):

```
0000 0000 0000 0000 0000 0000 0000 0000 two =              0 ten
0000 0000 0000 0000 0000 0000 0000 0001 two =             +1 ten
...
0111 1111 1111 1111 1111 1111 1111 1110 two = +2,147,483,646 ten
0111 1111 1111 1111 1111 1111 1111 1111 two = +2,147,483,647 ten  (maxint)
1000 0000 0000 0000 0000 0000 0000 0000 two = -2,147,483,648 ten  (minint)
1000 0000 0000 0000 0000 0000 0000 0001 two = -2,147,483,647 ten
...
1111 1111 1111 1111 1111 1111 1111 1110 two =             -2 ten
1111 1111 1111 1111 1111 1111 1111 1111 two =             -1 ten
```

Converting <32-bit values into 32-bit values:
- copy the most significant bit (the sign bit) into the "empty" upper bits, e.g. 0010 -> 0000 0010 and 1010 -> 1111 1010
- this is sign extension; zero extension fills the upper bits with 0s instead (`lb` vs. `lbu`)

MIPS Arithmetic Logic Unit (ALU)

Must support the arithmetic/logic operations of the ISA:
- add, addi, addiu, addu
- sub, subu
- mult, multu, div, divu, sqrt
- and, andi, nor, or, ori, xor, xori
- beq, bne, slt, slti, sltiu, sltu

With special handling for:
- sign extend: addi, addiu, slti, sltiu
- zero extend: andi, ori, xori
- overflow detection: add, addi, sub

(Figure: a 32-bit ALU with inputs A and B, a 4-bit operation select m, and result, zero, and overflow outputs.)

Dealing with Overflow

- Overflow occurs when the result of an operation cannot be represented in 32 bits, i.e., when the sign bit holds a value bit of the result and not the proper sign bit.

| Operation | Operand A | Operand B | Result indicating overflow |
|-----------|-----------|-----------|----------------------------|
| A + B     | ≥ 0       | ≥ 0       | < 0                        |
| A + B     | < 0       | < 0       | ≥ 0                        |
| A - B     | ≥ 0       | < 0       | < 0                        |
| A - B     | < 0       | ≥ 0       | ≥ 0                        |

- When adding operands with different signs, or when subtracting operands with the same sign, overflow can never occur.
- MIPS signals overflow with an exception (aka interrupt): an unscheduled procedure call where the EPC contains the address of the instruction that caused the exception.

Two's Complement Arithmetic

- Addition is accomplished by adding the codes, ignoring any final carry.
- Subtraction: change the sign and add.
- Exercises: 16 + (-23) = ?   16 - (-23) = ?   -23 - (-16) = ?

Multiply

- Binary multiplication is just a bunch of right shifts and adds.
- An n-bit multiplicand times an n-bit multiplier produces a double-precision (2n-bit) product.
- The partial product array can be formed in parallel and added in parallel for faster multiplication.

Multiplication Example

```
      1011    Multiplicand (11 dec)
    x 1101    Multiplier (13 dec)
    ------
      1011    Partial products: if the multiplier bit is 1, copy the
     0000     multiplicand (shifted to its place value);
    1011      otherwise the partial product is zero
   1011
  --------
  10001111    Product (143 dec)
```

Note: a double-length result is needed.


Add and Right Shift Multiplier Hardware

(Figure: the multiplicand register feeds one input of a 32-bit ALU; the product register holds the multiplier in its low half and shifts right each step, with control selecting add, then shift right.)

Worked example, 0110 x 0101 (6 x 5):

```
multiplicand 0110 (6);  product/multiplier register 0000 0101 (5)

bit = 1: add   -> 0110 0101
         shift -> 0011 0010
bit = 0: shift -> 0001 1001
bit = 1: add   -> 0111 1001
         shift -> 0011 1100
bit = 0: shift -> 0001 1110 = 30
```

Unsigned Binary Multiplication

Execution of Example

Multiplying Negative Numbers

- The unsigned algorithm above does not work!
- Solution 1:
  - Convert to positive if required
  - Multiply as above
  - If the signs were different, negate the answer
- Solution 2: Booth's algorithm

Booth's Algorithm

Example of Booth's Algorithm

MIPS Multiply Instruction

- Multiply (`mult` and `multu`) produces a double-precision product:
  `mult $s0, $s1  # hi||lo = $s0 * $s1`
- The low-order word of the product is left in processor register `lo` and the high-order word in register `hi`.
- Instructions `mfhi rd` and `mflo rd` are provided to move the product to (user-accessible) registers in the register file.
- R-format encoding of `mult $s0, $s1`: op = 0, rs = 16, rt = 17, rd = 0, shamt = 0, funct = 0x18.
- Multiplies are usually done by fast, dedicated hardware and are much more complex (and slower) than adders.

MIPS Multiplication

- Two 32-bit registers for the product
  - HI: most-significant 32 bits
  - LO: least-significant 32 bits
- Instructions
  - `mult rs, rt` / `multu rs, rt`: 64-bit product in HI/LO
  - `mfhi rd` / `mflo rd`: move from HI/LO to rd; can test the HI value to see if the product overflows 32 bits
  - `mul rd, rs, rt`: least-significant 32 bits of the product -> rd

Division  Division is just a bunch of quotient digit guesses and left shifts and subtracts dividend = quotient x divisor + remainder dividend divisor partial remainder array quotient n n remainder n 000 0 0 0

Division of Unsigned Binary Integers

```
              00001101     Quotient (13)
            __________
     1011 ) 10010011       Divisor 1011 (11), Dividend 10010011 (147)
             1011
             ----
             001110        partial remainder
               1011
               ----
               001111      partial remainder
                 1011
                 ----
                  100      Remainder (4)
```


Left Shift and Subtract Division Hardware

(Figure: the divisor register feeds one input of a 32-bit ALU; the remainder/quotient register holds the dividend and shifts left each step, with control selecting shift left, then subtract.)

Worked example, 0110 / 0010 (6 / 2):

```
divisor 0010 (2);  remainder/dividend register 0000 0110 (6)

shift left -> 0000 1100
sub        -> 1110 1100   remainder negative, so quotient bit = 0
restore    -> 0000 1100
shift left -> 0001 1000
sub        -> 1111 1000   remainder negative, so quotient bit = 0
restore    -> 0001 1000
shift left -> 0011 0000
sub        -> 0001 0001   remainder positive, so quotient bit = 1
shift left -> 0010 0010
sub        -> 0000 0011   remainder positive, so quotient bit = 1

result: quotient = 0011 (3), remainder = 0000 (0)
```

Division of Signed Binary Integers

MIPS Divide Instruction

- Divide (`div` and `divu`) generates the remainder in `hi` and the quotient in `lo`:
  `div $s0, $s1  # lo = $s0 / $s1;  hi = $s0 mod $s1`
- Instructions `mfhi rd` and `mflo rd` are provided to move the quotient and remainder to (user-accessible) registers in the register file.
- R-format encoding of `div $s0, $s1`: op = 0, rs = 16, rt = 17, rd = 0, shamt = 0, funct = 0x1A.
- As with multiply, divide ignores overflow, so software must determine if the quotient is too large. Software must also check the divisor to avoid division by 0.

MIPS Division

- Use the HI/LO registers for the result
  - HI: 32-bit remainder
  - LO: 32-bit quotient
- Instructions
  - `div rs, rt` / `divu rs, rt`
  - No overflow or divide-by-0 checking: software must perform checks if required
  - Use `mfhi`, `mflo` to access the result

Representation of Fractions

- "Binary point": like the decimal point, it marks the boundary between integer and fractional parts:

```
  x   x  .  y    y    y    y
  2^1 2^0   2^-1 2^-2 2^-3 2^-4
```

- Example, 6-bit representation: 10.1010 two = 1x2^1 + 1x2^-1 + 1x2^-3 = 2.625 ten
- If we assume a "fixed binary point", the range of 6-bit representations with this format is 0 to 3.9375 (almost 4).

Fractional Powers of 2

| i  | 2^-i              |
|----|-------------------|
| 0  | 1.0               |
| 1  | 0.5 = 1/2         |
| 2  | 0.25 = 1/4        |
| 3  | 0.125 = 1/8       |
| 4  | 0.0625 = 1/16     |
| 5  | 0.03125 = 1/32    |
| 6  | 0.015625          |
| 7  | 0.0078125         |
| 8  | 0.00390625        |
| 9  | 0.001953125       |
| 10 | 0.0009765625      |
| 11 | 0.00048828125     |
| 12 | 0.000244140625    |
| 13 | 0.0001220703125   |
| 14 | 0.00006103515625  |
| 15 | 0.000030517578125 |

Example (done in class): convert 0.828125 and 0.1640625 to binary fractions.

Representation of Fractions

So far our examples used a "fixed" binary point. What we really want is to "float" the binary point. Why? A floating binary point makes the most effective use of our limited bits, and thus gives more accuracy in our number representation.

Example: put 0.1640625 into binary: … 000000.001010100000… Represent it in 5 bits by choosing where to put the binary point: store the bits 10101 and keep track of the binary point, 2 places to the left of the MSB. Any other choice would lose accuracy!

With floating-point representation, each numeral carries an exponent field recording the whereabouts of its binary point. The binary point can be outside the stored bits, so very large and very small numbers can be represented.

Scientific Notation (in Decimal)

6.02 x 10^23: significand 6.02, decimal point, radix (base) 10, exponent 23

- Normalized form: no leading 0s (exactly one digit to the left of the decimal point)
- Alternatives for representing 1/1,000,000,000:
  - Normalized: 1.0 x 10^-9
  - Not normalized: 0.1 x 10^-8, 10.0 x 10^-10

Scientific Notation (in Binary)

1.0 two x 2^-1: significand 1.0, "binary point", radix (base) 2, exponent -1

- Computer arithmetic that supports it is called floating point, because it represents numbers where the binary point is not fixed, as it is for integers
- Declare such a variable in C as `float`

Floating Point Representation

- Normal format: ±1.xxxxxxxxxx two x 2^yyyy two
- 32-bit version (C `float`):

| Bit 31    | Bits 30-23        | Bits 22-0             |
|-----------|-------------------|-----------------------|
| S (1 bit) | Exponent (8 bits) | Significand (23 bits) |

- S represents the sign, Exponent represents the y's, and Significand represents the x's

Floating Point Representation

- What if the result is too large? Overflow! The exponent is larger than can be represented in the 8-bit Exponent field.
- What if the result is too small? Underflow! The negative exponent is larger in magnitude than can be represented in the 8-bit Exponent field.
- What would help reduce the chances of overflow and/or underflow?

Double Precision (vs. Single Precision) Fl. Pt. Representation

- 64-bit version (C `double`):

| Bit 63    | Bits 62-52         | Bits 51-32            | Bits 31-0                     |
|-----------|--------------------|-----------------------|-------------------------------|
| S (1 bit) | Exponent (11 bits) | Significand (20 bits) | Significand, cont'd (32 bits) |

- C variable declared as `double`
- The primary advantage is greater accuracy due to the larger significand

QUAD Precision Fl. Pt. Representation

- Next multiple of the word size (128 bits)
  - Unbelievable range of numbers, unbelievable precision (accuracy)
  - Currently being worked on (IEEE 754r); the current version has 15 exponent bits and 112 significand bits (113 precision bits)
- Oct-precision? Some have tried, no real traction so far
- Half-precision? Yep, that's for a short (16-bit) format
  (en.wikipedia.org/wiki/Quad_precision, en.wikipedia.org/wiki/Half_precision)

IEEE 754 Floating Point Standard

Single precision (DP similar), with fields S (1 bit, bit 31), Exponent (8 bits, bits 30-23), Significand (23 bits, bits 22-0):
- Sign bit: 1 means negative, 0 means positive
- Significand:
  - To pack more bits, the leading 1 is implicit for normalized numbers
  - 1 + 23 bits single, 1 + 52 bits double
  - always true for normalized numbers: 0 ≤ stored fraction < 1
- Note: 0 has no leading 1, so exponent value 0 is reserved just for the number 0

IEEE 754 Floating Point Standard

- IEEE 754 uses a "biased exponent" representation
  - Designers wanted FP numbers to be usable even with no FP hardware; e.g., sort records with FP numbers using integer compares
  - Wanted a bigger (integer) exponent field to represent bigger numbers
  - 2's complement poses a problem (because negative exponents look like big integers)
- Example (done in class): compare 1.0 x 2^-1 and 1.0 x 2^1

IEEE 754 Floating Point Standard

- Called biased notation, where the bias is the number subtracted to get the real number
  - IEEE 754 uses a bias of 127 for single precision: subtract 127 from the Exponent field to get the actual exponent
  - 1023 is the bias for double precision
- Summary (single precision, fields S / Exponent / Significand as above):

  value = (-1)^S x (1 + Significand) x 2^(Exponent - 127)

  Double precision is identical, except the exponent bias is 1023 (half and quad are similar).

Single-Precision Range

- Exponents 00000000 and 11111111 are reserved
- Smallest value:
  - Exponent 00000001 -> actual exponent = 1 - 127 = -126
  - Fraction 000…00 -> significand = 1.0
  - ±1.0 x 2^-126 ≈ ±1.2 x 10^-38
- Largest value:
  - Exponent 11111110 -> actual exponent = 254 - 127 = +127
  - Fraction 111…11 -> significand ≈ 2.0
  - ±2.0 x 2^+127 ≈ ±3.4 x 10^+38

Double-Precision Range

- Exponents 0000…00 and 1111…11 are reserved
- Smallest value:
  - Exponent 00000000001 -> actual exponent = 1 - 1023 = -1022
  - Fraction 000…00 -> significand = 1.0
  - ±1.0 x 2^-1022 ≈ ±2.2 x 10^-308
- Largest value:
  - Exponent 11111111110 -> actual exponent = 2046 - 1023 = +1023
  - Fraction 111…11 -> significand ≈ 2.0
  - ±2.0 x 2^+1023 ≈ ±1.8 x 10^+308

Floating-Point Precision

- Relative precision: all fraction bits are significant
  - Single: approx 2^-23, equivalent to 23 x log10(2) ≈ 23 x 0.3 ≈ 6 decimal digits of precision
  - Double: approx 2^-52, equivalent to 52 x log10(2) ≈ 52 x 0.3 ≈ 16 decimal digits of precision

Floating-Point Example

- Represent -0.75:
  - -0.75 = (-1)^1 x 1.1 two x 2^-1
  - S = 1
  - Fraction = 1000…00 two
  - Exponent = -1 + bias: single: -1 + 127 = 126 = 01111110 two; double: -1 + 1023 = 1022 = 01111111110 two
- Single: 1 01111110 1000…00
- Double: 1 01111111110 1000…00

Floating-Point Example

- What number is represented by the single-precision float 1 10000001 01000…00?
  - S = 1
  - Fraction = 01000…00 two
  - Exponent = 10000001 two = 129
- x = (-1)^1 x (1 + 0.01 two) x 2^(129 - 127) = (-1) x 1.25 x 2^2 = -5.0

Example: Converting Binary FP to Decimal (done in class)

11000100110010010011000000000000

Example: Converting Decimal to FP (done in class)

63.25

Representation for 0

- How do we represent 0?
  - exponent all zeroes
  - significand all zeroes
  - What about the sign? Both cases are valid:

```
+0: 0 00000000 00000000000000000000000
-0: 1 00000000 00000000000000000000000
```

Special Numbers

What have we defined so far? (single precision)

| Exponent | Significand | Object             |
|----------|-------------|--------------------|
| 0        | 0           | 0                  |
| 0        | nonzero     | ???                |
| 1-254    | anything    | +/- fl. pt. number |
| 255      | 0           | +/- ∞              |
| 255      | nonzero     | ???                |

Representation for Not a Number

- What do I get if I calculate sqrt(-4.0) or 0/0?
  - If ∞ is not an error, these shouldn't be either
  - Called Not a Number (NaN): Exponent = 255, Significand nonzero
- Why is this useful?
  - The hope is that NaNs help with debugging
  - They contaminate: op(NaN, X) = NaN

Infinities and NaNs

- Exponent = 111...1, Fraction = 000...0
  - ±Infinity
  - Can be used in subsequent calculations, avoiding the need for an overflow check
- Exponent = 111...1, Fraction ≠ 000...0
  - Not-a-Number (NaN)
  - Indicates an illegal or undefined result, e.g., 0.0 / 0.0
  - Can be used in subsequent calculations

Representation for Denorms

- Problem: there's a gap among representable FP numbers around 0
- Normalization and the implicit 1 are to blame! (done in class)

(Figure: number line around 0 showing gaps between 0 and the smallest normalized numbers ±a and ±b.)

Representation for Denorms

- Solution:
  - We still haven't used Exponent = 0, Significand nonzero
  - Denormalized number: no (implied) leading 1, implicit exponent = -126
  - Smallest representable positive number: a = 2^-23 x 2^-126 = 2^-149
  - Second smallest representable positive number: b = 2^-148

Special Numbers

What have we defined so far? (single precision)

| Exponent | Significand | Object             |
|----------|-------------|--------------------|
| 0        | 0           | 0                  |
| 0        | nonzero     | Denorm             |
| 1-254    | anything    | +/- fl. pt. number |
| 255      | 0           | +/- ∞              |
| 255      | nonzero     | NaN                |

Floating-Point Addition

Consider a 4-digit decimal example: 9.999 x 10^1 + 1.610 x 10^-1

1. Align decimal points (shift the number with the smaller exponent): 9.999 x 10^1 + 0.016 x 10^1
2. Add significands: 9.999 x 10^1 + 0.016 x 10^1 = 10.015 x 10^1
3. Normalize the result and check for over/underflow: 1.0015 x 10^2
4. Round and renormalize if necessary: 1.002 x 10^2

Floating Point Addition

Addition (and subtraction): (±F1 x 2^E1) + (±F2 x 2^E2) = ±F3 x 2^E3

- Step 0: Restore the hidden bit in F1 and in F2
- Step 1: Align fractions by right-shifting F2 by E1 - E2 positions (assuming E1 ≥ E2), keeping track of (three of) the bits shifted out in G, R, and S
- Step 2: Add the resulting F2 to F1 to form F3
- Step 3: Normalize F3 (so it is in the form 1.XXXXX…)
  - If F1 and F2 have the same sign, F3 is in [1,4): at most a 1-bit right shift of F3 and increment of E3 (check for overflow)
  - If F1 and F2 have different signs, F3 may require many left shifts, each time decrementing E3 (check for underflow)
- Step 4: Round F3 and possibly normalize F3 again
- Step 5: Rehide the most significant bit of F3 before storing the result


Floating Point Addition Example

Add (0.5 = 1.0000 x 2^-1) + (-0.4375 = -1.1100 x 2^-2):

- Step 0: Hidden bits restored in the representation above
- Step 1: Shift the significand with the smaller exponent (1.1100) right until its exponent matches the larger exponent (so once)
- Step 2: Add the significands: 1.0000 + (-0.111) = 1.0000 - 0.111 = 0.001
- Step 3: Normalize the sum, checking for exponent over/underflow: 0.001 x 2^-1 = 0.010 x 2^-2 = … = 1.000 x 2^-4
- Step 4: The sum is already rounded, so we're done
- Step 5: Rehide the hidden bit before storing

Hardware for Floating-Point Addition Figure 12.5 Simplified schematic of a floating-point adder.

Floating Point Multiplication

Multiplication: (±F1 x 2^E1) x (±F2 x 2^E2) = ±F3 x 2^E3

- Step 0: Restore the hidden bit in F1 and in F2
- Step 1: Add the two (biased) exponents and subtract the bias from the sum, so E1 + E2 - 127 = E3; also determine the sign of the product (which depends on the signs of the operands, i.e., their most significant bits)
- Step 2: Multiply F1 by F2 to form a double-precision F3
- Step 3: Normalize F3 (so it is in the form 1.XXXXX…)
  - Since F1 and F2 come in normalized, F3 is in [1,4): at most a 1-bit right shift of F3 and increment of E3
  - Check for overflow/underflow
- Step 4: Round F3 and possibly normalize F3 again
- Step 5: Rehide the most significant bit of F3 before storing the result


Floating Point Multiplication Example

Multiply (0.5 = 1.0000 x 2^-1) x (-0.4375 = -1.1100 x 2^-2):

- Step 0: Hidden bits restored in the representation above
- Step 1: Add the exponents: unbiased, -1 + (-2) = -3; in biased form, (-1 + 127) + (-2 + 127) - 127 = (-1 - 2) + (127 + 127 - 127) = -3 + 127 = 124
- Step 2: Multiply the significands: 1.0000 x 1.110 = 1.110000
- Step 3: Normalize the product, checking for exponent over/underflow: 1.110000 x 2^-3 is already normalized
- Step 4: The product is already rounded, so we're done
- Step 5: Rehide the hidden bit before storing (the sign of the result is negative)

Hardware for Floating-Point Multiplication and Division Figure 12.6 Simplified schematic of a floating- point multiply/divide unit.

Floating Point Examples

- Add (0.75) + (-0.375)
- Multiply (0.75) x (-0.375)

MIPS Floating Point Instructions

- MIPS has a separate floating-point register file (`$f0, $f1, …, $f31`), whose registers are used in pairs for double-precision values, with special instructions to load to and store from them:

  ```
  lwc1 $f1,54($s2)   # $f1 = Memory[$s2+54]
  swc1 $f1,58($s4)   # Memory[$s4+58] = $f1
  ```

- And it supports IEEE 754 single-precision operations:

  ```
  add.s $f2,$f4,$f6   # $f2 = $f4 + $f6
  ```

  and double-precision operations:

  ```
  add.d $f2,$f4,$f6   # $f2||$f3 = $f4||$f5 + $f6||$f7
  ```

  similarly for `sub.s`, `sub.d`, `mul.s`, `mul.d`, `div.s`, `div.d`

MIPS Floating Point Instructions, Con't

- And floating-point single-precision comparison operations:

  ```
  c.x.s $f2,$f4   # if($f2 x $f4) cond=1; else cond=0
  ```

  where x may be eq, neq, lt, le, gt, ge, and double-precision comparison operations:

  ```
  c.x.d $f2,$f4   # if($f2||$f3 x $f4||$f5) cond=1; else cond=0
  ```

- And floating-point branch operations:

  ```
  bc1t 25   # if(cond==1) go to PC+4+25
  bc1f 25   # if(cond==0) go to PC+4+25
  ```

FP Example: °F to °C

C code:

```c
float f2c (float fahr) {
  return ((5.0/9.0) * (fahr - 32.0));
}
```

`fahr` in `$f12`, result in `$f0`, literals in global memory space.

Compiled MIPS code:

```
f2c: lwc1  $f16, const5($gp)
     lwc1  $f18, const9($gp)
     div.s $f16, $f16, $f18
     lwc1  $f18, const32($gp)
     sub.s $f18, $f12, $f18
     mul.s $f0, $f16, $f18
     jr    $ra
```

FP Example: Array Multiplication

- X = X + Y x Z, where all are 32 x 32 matrices of 64-bit double-precision elements
- C code:

```c
void mm (double x[32][32], double y[32][32], double z[32][32])
{
  int i, j, k;

  for (i = 0; i != 32; i = i + 1)
    for (j = 0; j != 32; j = j + 1)
      for (k = 0; k != 32; k = k + 1)
        x[i][j] = x[i][j] + y[i][k] * z[k][j];
}
```

Addresses of x, y, z in `$a0, $a1, $a2`, and i, j, k in `$s0, $s1, $s2`.

FP Example: Array Multiplication

MIPS code:

```
      li    $t1, 32        # $t1 = 32 (row size/loop end)
      li    $s0, 0         # i = 0; initialize 1st for loop
L1:   li    $s1, 0         # j = 0; restart 2nd for loop
L2:   li    $s2, 0         # k = 0; restart 3rd for loop
      sll   $t2, $s0, 5    # $t2 = i * 32 (size of row of x)
      addu  $t2, $t2, $s1  # $t2 = i * size(row) + j
      sll   $t2, $t2, 3    # $t2 = byte offset of [i][j]
      addu  $t2, $a0, $t2  # $t2 = byte address of x[i][j]
      l.d   $f4, 0($t2)    # $f4 = 8 bytes of x[i][j]
L3:   sll   $t0, $s2, 5    # $t0 = k * 32 (size of row of z)
      addu  $t0, $t0, $s1  # $t0 = k * size(row) + j
      sll   $t0, $t0, 3    # $t0 = byte offset of [k][j]
      addu  $t0, $a2, $t0  # $t0 = byte address of z[k][j]
      l.d   $f16, 0($t0)   # $f16 = 8 bytes of z[k][j]
      …
```

FP Example: Array Multiplication

```
      …
      sll   $t0, $s0, 5       # $t0 = i * 32 (size of row of y)
      addu  $t0, $t0, $s2     # $t0 = i * size(row) + k
      sll   $t0, $t0, 3       # $t0 = byte offset of [i][k]
      addu  $t0, $a1, $t0     # $t0 = byte address of y[i][k]
      l.d   $f18, 0($t0)      # $f18 = 8 bytes of y[i][k]
      mul.d $f16, $f18, $f16  # $f16 = y[i][k] * z[k][j]
      add.d $f4, $f4, $f16    # $f4 = x[i][j] + y[i][k] * z[k][j]
      addiu $s2, $s2, 1       # k = k + 1
      bne   $s2, $t1, L3      # if (k != 32) go to L3
      s.d   $f4, 0($t2)       # x[i][j] = $f4
      addiu $s1, $s1, 1       # j = j + 1
      bne   $s1, $t1, L2      # if (j != 32) go to L2
      addiu $s0, $s0, 1       # i = i + 1
      bne   $s0, $t1, L1      # if (i != 32) go to L1
```

Accurate Arithmetic

- IEEE Std 754 specifies additional rounding control
  - Extra bits of precision (guard, round, sticky)
  - A choice of rounding modes
  - Allows the programmer to fine-tune the numerical behavior of a computation
- Not all FP units implement all options; most programming languages and FP libraries just use the defaults
- Trade-off between hardware complexity, performance, and market requirements

Problem  Calculate the sum of A and B assuming the 16-bit NVIDIA format (1 bit sign, 5 bit exponent and 10 bit significands), as well as, 1 guard, 1 round bit and 1 sticky bit.  A = -1.278 x 10 3  B = 3.90625 x 10 -1

Problem  Calculate the product of A and B assuming the 16-bit NVIDIA format (1 bit sign, 5 bit exponent and 10 bit significands), as well as, 1 guard, 1 round bit and 1 sticky bit.  A = 5.66015625 x 10 0  B = 8.59375 x 10 0

Associativity  Parallel programs may interleave operations in unexpected orders l Assumptions of associativity may fail

Problem  Calculate (A+B)+C and A+(B+C) assuming the 16-bit NVIDIA format (1 bit sign, 5 bit exponent and 10 bit significands), as well as, 1 guard, 1 round bit and 1 sticky bit.  A = 2.865625 x 10 1  B = 4.140625 x 10 -1  C = 1.2140625 x 10 1
