EECS150 - Digital Design, Spring 2002
Lecture 13 - Final Project Description
March 7, 2002, John Wawrzynek



Project
Everyone will design, debug, and demonstrate a Music Synthesizer.
Operation is based on the principle of waveform synthesis, or sampling:
–Sounds from recordings of real musical instruments are stored in memory, then pitch-shifted and played back in response to note commands.
1 or 2 partners per group.

Digital Waveforms
In digital systems, waveforms are represented as a series of numbers, rather than as a voltage or current as in analog systems.
–Example: a sound waveform.
–Sound can be produced by sending a series of numbers to a digital-to-analog converter, then to an amplifier, then to a speaker.
–In principle, any sound can be produced.
Sampling rate: 31.25 kHz, 16 bits per sample.
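The "series of numbers" idea can be sketched in software (Python used purely for illustration; `sample_sine` and the constant names are mine, not part of the project spec):

```python
import math

SAMPLE_RATE = 31250          # 31.25 kHz, as in the project
FULL_SCALE = 2 ** 15 - 1     # largest signed 16-bit sample value

def sample_sine(freq_hz, n_samples):
    """Represent a sine wave as a series of signed 16-bit numbers."""
    return [round(FULL_SCALE * math.sin(2 * math.pi * freq_hz * n / SAMPLE_RATE))
            for n in range(n_samples)]

# One period of a 440 Hz tone spans about 31250 / 440, roughly 71 samples.
wave = sample_sine(440, 71)
```

In the actual design this stream of numbers is what the datapath feeds to the DAC, one sample every 1/31250 of a second.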

Interfaces
MIDI: Musical Instrument Digital Interface.
–Commands are sent from a keyboard (or computer) to control the synthesizer.
Waveforms are stored in ROM (read-only memory).
Monophonic: one voice at a time.

Theory of Sound and Music
Air vibrating in the frequency range of 20 Hz to 20 kHz is perceived as sound.
The three important characteristics of perceived sound are:
–loudness (relates to amplitude)
–pitch (relates to frequency)
–timbre (relates to waveform shape)
Human hearing is approximately logarithmic in perceiving loudness and pitch:
–we perceive loudness as being proportional to the log of the sound wave's amplitude.
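The logarithmic-loudness point is exactly why audio engineers use the decibel scale. A small sketch (function name mine):

```python
import math

def amplitude_to_db(amplitude, reference=1.0):
    """Map amplitude onto a log (decibel) scale, which tracks perceived loudness."""
    return 20 * math.log10(amplitude / reference)

# Each doubling of amplitude adds the same perceived step, about 6 dB:
one_doubling = amplitude_to_db(2) - amplitude_to_db(1)
two_doublings = amplitude_to_db(4) - amplitude_to_db(2)
```

Doubling the amplitude always adds the same perceived increment, rather than doubling the perceived loudness.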

Timbre: tone quality or "color".
Different instruments have different timbres. We perceive timbre based on how a note begins, repeats, and ends.
For many instruments a simple model can be used to represent the shape of the waveform:
–attack, sustain, release (decay).
This works best for "driven" instruments: woodwinds, brass, bowed strings. Plucked and struck instruments don't have the "sustain" portion.
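The attack/sustain/release shape can be sketched as a piecewise-linear gain curve that multiplies the raw waveform (a software illustration; the function name and sample-index parameterization are mine):

```python
def asr_gain(n, attack, release, total):
    """Piecewise-linear attack/sustain/release gain for sample index n."""
    if n < attack:
        return n / attack              # attack: ramp from 0 up to full gain
    if n < total - release:
        return 1.0                     # sustain: hold full gain
    if n < total:
        return (total - n) / release   # release: ramp back down to 0
    return 0.0

# Shape a 100-sample note: 10-sample attack, 20-sample release.
envelope = [asr_gain(n, attack=10, release=20, total=100) for n in range(100)]
```

For a plucked or struck instrument the flat sustain segment would be absent, with the release beginning as soon as the attack ends.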

Pitch
Middle C has a frequency of about 261.63 Hz.
–MIDI encoding of 60.
High C (an octave above middle C) has a frequency of about 523.25 Hz.
–MIDI encoding of 72.
Other tones can be produced by multiplying and dividing the frequency by factors of the 12th root of 2.
Pitch:
–12 semitones form the chromatic scale of western music.
–To move one semitone up to the next: freq_next = freq * 2^(1/12).
–After 12 such multiplications we will have doubled the frequency and reached the octave.
Most people can detect pitch differences as small as a few hundredths of a semitone (a few factors of the 1200th root of 2)!
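The semitone arithmetic translates directly into code (a sketch; `midi_to_freq` is my name, with the standard A4 = 440 Hz tuning reference assumed):

```python
def midi_to_freq(note, a4_note=69, a4_freq=440.0):
    """Equal temperament: each semitone step multiplies frequency by 2**(1/12)."""
    return a4_freq * 2 ** ((note - a4_note) / 12)

middle_c = midi_to_freq(60)   # about 261.63 Hz
high_c = midi_to_freq(72)     # twelve semitones up: exactly double
```

Twelve applications of the 2**(1/12) factor double the frequency, which is why `high_c` comes out as exactly twice `middle_c`.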


Playing Notes
The ROM used to store notes has limited capacity.
For notes with a "sustain" portion, we would like to vary the note duration.
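One way to vary the duration of a stored note without storing a longer recording is to play the attack once and then repeat a loop region of the sustain until the note should end. A software sketch of the idea (the function name and slicing scheme are mine, not the project's exact template format):

```python
def play_note(template, attack_len, loop_len, total_samples):
    """Play the attack once, then cycle the sustain loop to fill the duration."""
    out = list(template[:attack_len])                  # attack, played once
    loop = template[attack_len:attack_len + loop_len]  # sustain loop region
    while len(out) < total_samples:
        out.append(loop[(len(out) - attack_len) % loop_len])
    return out[:total_samples]

# 6 stored samples stretched to 8: attack [0, 1], then loop [2, 3, 4] repeats.
stretched = play_note([0, 1, 2, 3, 4, 5], attack_len=2, loop_len=3, total_samples=8)
```

A release segment would normally follow once the key is lifted; it is omitted here for brevity.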

Pitch Shifting


Linear Interpolation
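Pitch shifting and linear interpolation work together: the synthesizer steps through the stored template at a fractional "step size" (steps larger than 1 raise the pitch, smaller than 1 lower it), and linear interpolation supplies values that fall between adjacent stored samples. A software sketch (function name mine):

```python
def pitch_shift(template, step):
    """Read the template at a fractional step, interpolating linearly."""
    out, pos = [], 0.0
    while pos < len(template) - 1:
        i = int(pos)          # integer part: which stored sample
        frac = pos - i        # fractional part: how far toward the next one
        out.append(template[i] + frac * (template[i + 1] - template[i]))
        pos += step
    return out

octave_up = pitch_shift([0, 10, 20, 30, 40], 2.0)   # reads every other sample
octave_down = pitch_shift([0, 10, 20, 30], 0.5)     # interpolates midpoints
```

In hardware this is naturally a fixed-point phase accumulator: the integer bits address the ROM and the fraction bits drive the interpolator.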

ROM (EEPROM) Layout
One stored note per instrument is never enough:
–timbre varies from note to note over the range of the instrument.
The ROM holds a directory with one entry per MIDI note number.
Each entry holds a pointer to a note "template" and a "step size".
Note step sizes are precomputed (the synthesizer does not need to do the 12th-root-of-2 calculation).
One instrument per ROM (this might change later).

Directory Entry Layout
–20-bit template pointer
–12 bits of "step size"
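With a 20-bit pointer and a 12-bit step size, each directory entry fits in one 32-bit word. A sketch of the packing (the field ordering here is my assumption for illustration; the project spec fixes the actual layout):

```python
def pack_entry(pointer, step):
    """Pack a 20-bit template pointer and 12-bit step size into 32 bits."""
    assert 0 <= pointer < (1 << 20) and 0 <= step < (1 << 12)
    return (pointer << 12) | step

def unpack_entry(word):
    """Recover the (pointer, step) pair from a packed directory entry."""
    return word >> 12, word & 0xFFF
```

The hardware equivalent is simply routing the two bit fields of the ROM word to the address register and the phase accumulator.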

Template Layout

Instrument Template Files
We will provide you with template files and a program for converting them to EPROM format.
We also have programs for taking instrument samples in standard file formats and converting them to our template format.
You are encouraged to generate your own template files and EPROMs.
We might extend the format (and project) to allow more than one instrument per EPROM:
–switch among the instruments either through DIP switches or MIDI commands.

High-level Block Diagram
This is only a suggestion; your organization is up to you.
An FSM and datapath for each block.

Checkpoints
3/11 UART Design and Test
3/18 ROM Interfacing
3/25 Recess
4/1 MIDI Interface
4/1 Audio Stage
4/8 Monotone Notes
4/15 Notes of Arbitrary Frequency
4/22 Velocity Sensitivity
4/29 Spare
5/6 Final Checkoff
You are strongly encouraged to work ahead; these are only minimum requirements. Completion of checkpoints is part of your project grade.
The project spec document is online today. All checkpoint write-ups will be available in the next couple of days (in draft form).

Extra Credit
Early final checkoff:
–1 week or more early.
Low CLB count:
–"low" to be quantified later.
Interpolation:
–add linear interpolation for sample lookup.
Polyphony:
–the ability to play multiple keys at once.
Velocity-sensitive template lookup:
–index templates not only on key number but also on velocity.
Extra credit is only considered for fully functional designs. Point assignments will be announced later, usually in the 15% range.


Connections



Note On

Note Off
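In standard MIDI, a note-on message is a status byte 0x9n (where n is the channel) followed by a key number and a velocity, and note-off is 0x8n with the same two data bytes; by convention, a note-on with velocity 0 is treated as a note-off. A decoding sketch (function name mine):

```python
def parse_note_message(status, key, velocity):
    """Classify a MIDI channel-voice message as note-on, note-off, or other."""
    command, channel = status & 0xF0, status & 0x0F
    if command == 0x90 and velocity > 0:
        return ("note_on", channel, key, velocity)
    if command == 0x80 or (command == 0x90 and velocity == 0):
        return ("note_off", channel, key, velocity)   # incl. velocity-0 note-on
    return ("other", channel, key, velocity)
```

Handling the velocity-0 case matters in practice: many keyboards never send 0x8n at all and signal every release this way.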

Keyboard Mapping


"Running Status"
The MIDI standard allows a transmitter to compress the data stream by dropping status bytes: a command without a status byte implicitly reuses the most recently sent status byte. A keyboard can therefore send a sequence of note-on and note-off commands of which only the first has a status byte.
Your synthesizer must conform to the running-status convention.
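A running-status decoder can be sketched as follows: any byte with its high bit set is a new status byte, while data bytes arriving without one reuse the last status seen. (For brevity the sketch assumes two-data-byte messages such as note-on/off; the function name is mine.)

```python
def decode_stream(data):
    """Decode MIDI bytes, honoring running status: data bytes that arrive
    without a fresh status byte reuse the most recent one."""
    messages, status, pending = [], None, []
    for b in data:
        if b & 0x80:              # high bit set: a new status byte
            status, pending = b, []
        else:                     # data byte: part of the current message
            pending.append(b)
            if len(pending) == 2:
                messages.append((status, pending[0], pending[1]))
                pending = []
    return messages

# One status byte (note-on, channel 0) followed by three key/velocity pairs:
chord = decode_stream([0x90, 60, 100, 64, 100, 67, 100])
```

In the synthesizer this means the MIDI-interface FSM must latch the status byte in a register rather than expect one before every command.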