
DCSP-6: Signal Transmission + information theory Jianfeng Feng Department of Computer Science Warwick Univ., UK

How to deal with noise? How to transmit signals?

Errors in digital communication. We noted earlier that one of the most important advantages of digital communication is that it permits very high fidelity. We shall consider in detail only BPSK systems, and comment on the alternative modulations. In the absence of noise, the signal V from a BPSK system can take one of two values, +v_b or -v_b. In the ideal case, if the signal is greater than 0, it is assigned the value 1.

If the signal is less than 0, it is assigned the value 0. When noise is present, the distinction between +v_b and -v_b (with the threshold at 0) becomes blurred. There is a finite probability of the signal dropping below 0, and thus being assigned 0, even though a 1 was transmitted. When this happens, we say that a bit error has occurred. The probability that a bit error will occur in a given time is referred to as the bit-error rate (BER).

We suppose that the signal V has the levels +v_b or -v_b, with additive Gaussian noise N of variance σ². The probability that an error will occur in the transmission of a 1 is

P(error | 1 sent) = P(v_b + N < 0) = (1/2) erfc( v_b / (σ √2) )

It is usual to write these expressions in terms of the ratio of (energy per bit) to (noise power per unit Hz), E_b/N_0. The power S in the signal is, on average, v_b², and the total energy in the signalling period T is v_b² T. In these terms, the BPSK bit-error rate becomes P(error) = (1/2) erfc( √(E_b/N_0) ).
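As a concrete check on these formulas, here is a minimal Python sketch (illustrative, not from the slides) that compares the closed-form BPSK error rate above with a Monte Carlo simulation of the thresholding receiver; the signal level v_b and the bit count are arbitrary choices.

```python
import math
import random

def bpsk_ber_theory(eb_n0: float) -> float:
    """Closed form for coherent BPSK in AWGN: P(error) = 0.5*erfc(sqrt(Eb/N0))."""
    return 0.5 * math.erfc(math.sqrt(eb_n0))

def bpsk_ber_simulated(eb_n0: float, n_bits: int = 100_000) -> float:
    """Monte Carlo estimate: send +/-v_b, add Gaussian noise, threshold at 0."""
    v_b = 1.0                                # arbitrary signal level
    sigma = v_b / math.sqrt(2.0 * eb_n0)     # noise std dev matching this Eb/N0
    errors = 0
    for _ in range(n_bits):
        bit = random.choice((0, 1))
        v = (v_b if bit == 1 else -v_b) + random.gauss(0.0, sigma)
        if (v > 0) != (bit == 1):            # receiver decides by the sign of v
            errors += 1
    return errors / n_bits

for db in (0, 4, 8):
    eb_n0 = 10 ** (db / 10)                  # convert dB to a linear ratio
    print(db, "dB  theory:", bpsk_ber_theory(eb_n0),
          " simulated:", bpsk_ber_simulated(eb_n0))
```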

Timing control. In addition to providing the analogue modulation and demodulation functions, digital communication also requires timing control. The need to introduce a clock into signal transmission is obvious if we look at the figure.

Timing control is required to identify the rate at which bits are transmitted and to identify the start and end of each bit. This permits the receiver to correctly identify each bit in the transmitted message. Bits are never sent individually.

They are grouped together in segments, called blocks. A block is the minimum segment of data that can be sent with each transmission. Usually, a message will contain many such blocks.

Each block is framed by binary characters identifying the start and end of the block. The method used depends on the source of the timing information. If the receiver generates its timing locally, separately from the transmitter, the transmission is termed asynchronous. If the timing is derived, directly or indirectly, from the transmitter's clock, the transmission is termed synchronous.

Asynchronous transmission is used for low-data-rate transmission and stand-alone equipment. We will not discuss it in detail here. Synchronous transmission is used for high-data-rate transmission. The timing is generated by sending a separate clock signal or by embedding the timing information into the transmission. This information is used to synchronize the receiver circuitry to the transmitter clock.

Synchronous receivers require a timing signal from the transmitter. An additional channel may be used in the system to transmit the clock signal. This is wasteful of bandwidth, so it is more customary to embed the timing signal within the transmitted data stream by use of suitable encoding (self-clocking encoding).

Bipolar coding: a binary 0 is encoded as zero volts; a binary 1 is encoded alternately as a positive voltage and a negative voltage.
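A minimal sketch of bipolar (AMI) encoding as just described; the voltage level v = 1.0 and the function name are arbitrary choices:

```python
def bipolar_encode(bits, v=1.0):
    """Bipolar (AMI): 0 -> 0 volts; each 1 alternates between +v and -v."""
    level = -v                   # so the first 1 is sent as +v
    out = []
    for b in bits:
        if b == 0:
            out.append(0.0)
        else:
            level = -level       # alternate polarity on every 1
            out.append(level)
    return out

print(bipolar_encode([1, 0, 1, 1, 0, 1]))
# [1.0, 0.0, -1.0, 1.0, 0.0, -1.0]
```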

Example: Ethernet (classic 10 Mbit/s Ethernet uses Manchester coding).

Manchester code (also known as phase encoding, or PE) is a form of data encoding in which each bit of data is signified by at least one voltage-level transition. Manchester encoding is therefore considered to be self-clocking, which means that accurate synchronization of a data stream is possible. Each bit is transmitted over a predefined time period.

Summary: data and clock signals are combined to form a single self-synchronizing data stream:
- each encoded bit contains a transition at the midpoint of a bit period;
- the direction of the transition determines whether the bit is a "0" or a "1";
- the first half of each period is the true bit value and the second half is the complement of the true bit value.
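A minimal sketch of Manchester encoding under the convention above (first half carries the true bit value, second half its complement); function names are my own:

```python
def manchester_encode(bits):
    """Each bit becomes two half-bit levels with a guaranteed mid-bit
    transition, which the receiver can use to recover the clock."""
    signal = []
    for b in bits:
        signal += [b, 1 - b]      # 1 -> high,low ; 0 -> low,high
    return signal

def manchester_decode(signal):
    # Pair up half-bit samples; the first of each pair is the bit value.
    return [signal[i] for i in range(0, len(signal), 2)]

bits = [1, 0, 0, 1, 1]
encoded = manchester_encode(bits)
assert manchester_decode(encoded) == bits
print(encoded)   # [1, 0, 0, 1, 0, 1, 1, 0, 1, 0]
```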

Applications. Data transmission is the conveyance of any kind of information from one place to another. Historically this could be done by courier, a chain of bonfires, and later by Morse code over copper wires. In modern computer terms, it means sending a stream of bits or bytes from one location to another using any number of technologies, such as copper wire, optical fibre, laser, radio, or infrared light. Practical examples include moving data from one data storage device to another, or accessing a website, which involves data transfer from web servers to a user's browser. A related concept is the data transmission protocol used to make the data transfer intelligible.

Protocol. A protocol is an agreed-upon format for transmitting data between two devices, e.g. a computer and a printer. All communication between devices requires that the devices agree on the format of the data; the set of rules defining a format is called a protocol. The protocol determines the following:
- the type of error checking to be used, if any, e.g. a check digit (and what type or formula is to be used);
- the data compression method, if any, e.g. zipped files if the file is large, as in transfers across the Internet, LANs and WANs;

- how the sending device will indicate that it has finished sending a message, e.g. in a parallel transfer a spare wire would be used, while for serial (USB) transfer start and stop digits may be used;
- how the receiving device will indicate that it has received a message;
- the rate of transmission (in baud or bit rate);
- whether transmission is to be synchronous or asynchronous.
In addition, protocols can include sophisticated techniques for detecting and recovering from transmission errors and for encoding and decoding data. A toy example of framing and error checking is sketched below.
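Here is a hypothetical toy protocol (the frame format and names are invented for illustration, not a real standard) that frames a block with start/end markers and a one-byte checksum, two of the rules listed above:

```python
START, END = 0x02, 0x03            # STX/ETX-style frame delimiters

def make_frame(payload: bytes) -> bytes:
    checksum = sum(payload) % 256  # simplistic one-byte error check
    return bytes([START]) + payload + bytes([checksum, END])

def parse_frame(frame: bytes) -> bytes:
    if frame[0] != START or frame[-1] != END:
        raise ValueError("bad framing")
    payload, checksum = frame[1:-2], frame[-2]
    if sum(payload) % 256 != checksum:
        raise ValueError("checksum mismatch")  # receiver detects the error
    return payload

frame = make_frame(b"HELLO")
assert parse_frame(frame) == b"HELLO"
```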

Course outline (Weeks 2-4): Introduction; Fourier Transform I; Fourier Transform II; ASK, FSK, and PSK; Noise; Signal Transmission.

Data transmission: Channel characteristics, signalling methods, interference and noise, synchronisation, data compression and encryption;

Information sources and coding: information theory, coding of information for efficiency and error protection;

Information and coding theory. Information theory is concerned with the description of information sources, the representation of the information from a source, and the transmission of this information over a channel. This might be the best example of how a deep mathematical theory can be successfully applied to solving engineering problems.

Information theory is a discipline in applied mathematics involving the quantification of data with the goal of enabling as much data as possible to be reliably stored on a medium and/or communicated over a channel.

The measure of data, known as information entropy, is usually expressed by the average number of bits needed for storage or communication.

The field is at the crossroads of mathematics, statistics, computer science, physics, neurobiology, and electrical engineering.

Its impact has been crucial to the success of the Voyager missions to deep space, the invention of the CD, the feasibility of mobile phones, the development of the Internet, the study of linguistics and of human perception, the understanding of black holes, and numerous other fields.

Information theory is generally considered to have been founded in 1948 by Claude Shannon in his seminal work, A Mathematical Theory of Communication.

The central paradigm of classic information theory is the engineering problem of the transmission of information over a noisy channel. An avid chess player, Professor Shannon built a chess-playing computer years before IBM's Deep Blue came along. While on a trip to Russia in 1965, he challenged world champion Mikhail Botvinnik to a match. He lost in 42 moves, considered an excellent showing.

The most fundamental results of this theory are:
1. Shannon's source coding theorem, which establishes that, on average, the number of bits needed to represent the result of an uncertain event is given by its entropy;
2. Shannon's noisy-channel coding theorem, which states that reliable communication is possible over noisy channels provided that the rate of communication is below a certain threshold, called the channel capacity. The channel capacity can be approached by using appropriate encoding and decoding systems.
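As a concrete instance of channel capacity (an illustration, not stated on the slides), the Shannon-Hartley formula gives the capacity of a band-limited channel with additive white Gaussian noise:

```python
import math

def channel_capacity(bandwidth_hz: float, snr_linear: float) -> float:
    """Shannon-Hartley: C = B * log2(1 + S/N), in bits per second."""
    return bandwidth_hz * math.log2(1.0 + snr_linear)

# e.g. a 3 kHz telephone channel at 30 dB SNR (a linear ratio of 1000):
print(channel_capacity(3000, 1000))   # about 29,900 bits per second
```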

Consider predicting the activity of the Prime Minister tomorrow. This prediction is an information source. The information source has two outcomes: he will be in his office, or he will run naked for 10 miles in London.

Clearly, the outcome 'in office' contains little information; it is a highly probable outcome. The outcome 'naked run', however, contains considerable information; it is a highly improbable event.

In information theory, an information source is a probability distribution, i.e. a set of probabilities assigned to a set of outcomes. ("Nothing is certain, except death and taxes.") This reflects the fact that the information contained in an outcome is determined not only by the outcome itself, but by how uncertain it is. An almost certain outcome contains little information. A measure of the information contained in an outcome was introduced by Hartley in 1927.

He defined the information contained in an outcome x as

I(x) = -log2 p(x)

This measure satisfies our requirement that the information contained in an outcome is proportional to its uncertainty. If p(x) = 1, then I(x) = 0, telling us that a certain event contains no information.

The definition above also satisfies the requirement that the total information in independent events should add. Clearly, our Prime Minister prediction for two days contains twice as much information as for one day. For two independent outcomes x_i and x_j,

I(x_i and x_j) = -log2 P(x_i and x_j) = -log2 [ P(x_i) P(x_j) ] = -log2 P(x_i) - log2 P(x_j) = I(x_i) + I(x_j)

Hartley's measure defines the information in a single outcome.
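A small numerical sketch of Hartley's measure and its additivity, using illustrative probabilities for the two Prime Minister outcomes:

```python
import math

def info_bits(p: float) -> float:
    """I(x) = -log2 p(x): the information in an outcome of probability p."""
    return -math.log2(p)

p_office, p_run = 0.999, 0.001        # illustrative probabilities
print(info_bits(p_office))            # ~0.0014 bits: near-certain, little info
print(info_bits(p_run))               # ~9.97 bits: improbable, much info

# Additivity: two independent days carry the sum of the information.
assert math.isclose(info_bits(p_run * p_run), 2 * info_bits(p_run))
```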

The measure entropy H(X) defines the information content of the source X as a whole. It is the mean information provided by the source. We have

H(X) = Σ_i P(x_i) I(x_i) = - Σ_i P(x_i) log2 P(x_i)

A binary symmetric source (BSS) is a source with two outputs whose probabilities are p and 1-p respectively.

The Prime Minister example discussed above is a BSS. The entropy of the source is

H(X) = -p log2 p - (1-p) log2 (1-p)

The function takes the value zero when p = 0: when one outcome is certain, so is the other, and the entropy is zero. As p increases, so does the entropy, until it reaches a maximum when p = 1-p = 0.5. When p is greater than 0.5, the curve declines symmetrically back to zero, reached when p = 1.
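A minimal sketch of the binary entropy function just described, confirming that it is zero at p = 0 and p = 1 and peaks at exactly 1 bit when p = 0.5:

```python
import math

def binary_entropy(p: float) -> float:
    """H(p) = -p*log2(p) - (1-p)*log2(1-p), with H(0) = H(1) = 0."""
    if p in (0.0, 1.0):          # the limit of x*log2(x) as x -> 0 is 0
        return 0.0
    return -p * math.log2(p) - (1 - p) * math.log2(1 - p)

for p in (0.0, 0.1, 0.25, 0.5, 0.75, 1.0):
    print(f"H({p}) = {binary_entropy(p):.4f}")
# H is symmetric about p = 0.5, where it peaks at 1 bit.
```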

We conclude that the average information in the BSS is maximised when both outcomes are equally likely. The entropy measures the average uncertainty of the source. (The term entropy is borrowed from thermodynamics; there too it is a measure of the uncertainty, or disorder, of a system.)

In classical thermodynamics, the concept of entropy is defined phenomenologically by the second law of thermodynamics, which states that the entropy of an isolated system always increases or remains constant.

"My greatest concern was what to call it. I thought of calling it 'information', but the word was overly used, so I decided to call it 'uncertainty'. When I discussed it with John von Neumann, he had a better idea. Von Neumann told me: 'You should call it entropy, for two reasons. In the first place your uncertainty function has been used in statistical mechanics under that name, so it already has a name. In the second place, and more important, nobody knows what entropy really is, so in a debate you will always have the advantage.'" - Claude Shannon, recalling the advice of John von Neumann