
2 5th Meeting “BITS AND BYTES”

3 BIT A bit, or binary digit, is the basic unit of information in computing and telecommunications. It is the amount of information that can be stored by a digital device or other physical system that can exist in only two distinct states. These may be the two stable positions of an electrical switch, two distinct voltage or current levels allowed by a circuit, two distinct levels of light intensity, two directions of magnetization or polarization, and so on. The term "bit" is a contraction of "binary digit". In information theory, one bit is typically defined as the uncertainty of a binary random variable that is 0 or 1 with equal probability, or, equivalently, the information gained when the value of such a variable becomes known.

4 Multiples of bits

5 There are several units of information defined as multiples of bits, such as the byte (8 bits), the kilobit (either 1,000 or 2^10 = 1,024 bits), and the megabyte (either 8,000,000 or 8 × 2^20 = 8,388,608 bits). The symbol for the binary digit should be "bit", and this should be used in all multiples, such as "kbit" (for kilobit). However, the lower-case letter "b" is also widely used. The upper-case letter "B" is both the standard and customary symbol for the byte.

6 BYTE The byte, coined from "bite", but respelled to avoid accidental mutation to "bit", is a unit of digital information in computing and telecommunications. It is an ordered collection of bits, in which each bit denotes the binary value of 1 or 0. The size of a byte is typically hardware dependent, but the modern de facto standard is eight bits, as this is a convenient power of two. Many types of applications use variables representable in eight or fewer bits, and processor designers optimize for this common usage. The byte size and byte addressing are often used in place of longer integers for size or speed optimizations in microcontrollers and CPUs.

7 The byte is also defined as a data type in certain programming languages. The C and C++ programming languages, for example, define a byte as an "addressable unit of data large enough to hold any member of the basic character set of the execution environment".

8 Prefixes for bit and byte multiples (table of units with their symbols or abbreviations)

9 Group Discussion
1. How many digits does a binary system use?
2. What is the difference between binary notation and the decimal system? Give some examples.
3. What is a collection of eight bits called?
4. One kilobyte (1K) equals 1,024 bytes. Can you work out the value of these units? (kilo-: one thousand)
   1 megabyte = ........ bytes / 1,024 kilobytes (mega-: one million)
   1 gigabyte = ........ bytes / 1,024 megabytes (giga-: one thousand million)
5. What does the acronym "ASCII" stand for? What is the purpose of this code?

