Slide 1: Virtualized Audio: A Highly Adaptive Interactive High Performance Computing Application
Dong Lu, Peter A. Dinda
Prescience Laboratory, Computer Science Department, Northwestern University
http://www.cs.northwestern.edu/~donglu
http://www.cs.northwestern.edu/~pdinda
Slide 2: Overview
Virtualized Audio: an immersive, listener-centric audio system based on high performance computing
User-driven HPC exposes new challenges
How to exploit many adaptation mechanisms to achieve responsiveness
Concepts and initial results introduced here
Slide 3: Outline
Limitations of traditional audio
Virtualized audio
Interactive source separation and auralization
Structure of interactive auralization
Adaptation mechanisms
Initial performance evaluation
Conclusions
Slide 4: Traditional Audio
[Diagram: Performer → Microphones (Sound Field 1, Performance Room) → Mixer → Amp → Loudspeakers/Headphones (Sound Field 2, Listening Room) → Listener]
Slide 5: Limitations of Traditional Audio
[Diagram: same recording/playback chain as Slide 4]
Microphones capture the performance room as well as the performer
The mixing process destroys recorded information
Slide 6: Virtualized Audio: Source Separation
[Diagram: Performer → Microphones (Sound Field 1, Performance Room) → Separation → Performer]
The recording process results in only the performer
Not currently implemented; not the subject of this talk
Slide 7: Limitations of Traditional Audio
[Diagram: same recording/playback chain as Slide 4]
Playback ignores the listening room and the listener
Playback does not adjust as the listener moves
Slide 8: Virtualized Audio: Interactive Auralization
[Diagram: Virtual Performer → Auralization (Room, Performer, Listener HRTF) → Sound Field 2 → Headphones → Listener at a virtual location in the virtual listening room]
Auralization injects the performer into the listener's space
Auralization adapts as the listener moves or the room changes
This is the subject of this talk
Slide 9: Architecture of Interactive Auralization
[Diagram: a user-driven immersive audio client sends the current spatial model and source/sink positions to a scalable real-time simulation server (parallel FD simulation, filter generation); the resulting filter configuration goes to a scalable audio filtering service (master filtering server, per-source filtering servers, mixing server), which filters streams from a streaming audio service (Source 1 .. Source n) and returns left and right channels to the client as binaural audio output]
Impulse response filters characterize the listening room
Slide 11: Finite Difference Simulation of the Wave Equation
Compute the impulse response by injecting an impulse and then iterating the simulation ("snap fingers and record")
Captures nuances by simulating the physics
Stencil computation on a distributed array (sketched below)
∂²p/∂t² = ∂²p/∂x² + ∂²p/∂y² + ∂²p/∂z²
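As a concrete illustration of the stencil, here is a minimal serial sketch of one time step of a second-order finite difference update for this equation. The grid dimensions, array layout, and function names are assumptions chosen for illustration, not the authors' code; the real server block-distributes this array across PVM tasks.

```cpp
// Minimal serial sketch of one FDTD time step for the 3D wave equation.
#include <vector>

using Grid = std::vector<double>;               // flattened NX*NY*NZ pressure grid

const int NX = 160, NY = 120, NZ = 60;          // assumed grid dimensions
inline int idx(int i, int j, int k) { return (i * NY + j) * NZ + k; }

// One leapfrog update: compute p_next from p_curr and p_prev.
// courant2 = (c * dt / dx)^2 must be <= 1/3 for stability in 3D.
void step(const Grid& p_prev, const Grid& p_curr, Grid& p_next, double courant2) {
    for (int i = 1; i < NX - 1; ++i)
        for (int j = 1; j < NY - 1; ++j)
            for (int k = 1; k < NZ - 1; ++k) {
                double lap = p_curr[idx(i-1,j,k)] + p_curr[idx(i+1,j,k)]
                           + p_curr[idx(i,j-1,k)] + p_curr[idx(i,j+1,k)]
                           + p_curr[idx(i,j,k-1)] + p_curr[idx(i,j,k+1)]
                           - 6.0 * p_curr[idx(i,j,k)];
                p_next[idx(i,j,k)] = 2.0 * p_curr[idx(i,j,k)] - p_prev[idx(i,j,k)]
                                   + courant2 * lap;
            }
}
```

To "snap fingers and record," an impulse is injected at the source grid point at the first step and the pressure at the listener grid point is recorded every step; rotating the three grids and repeating yields the room impulse response.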
Slide 12: Simulation Server
Simple stateless request/response protocol (illustrated below)
Block-distributed simulation arrays
Extensible and modifiable
Built with C++ and PVM
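A hedged sketch of what such a stateless request/response exchange might carry. The field names, types, and units below are assumptions chosen for illustration, not the server's actual protocol.

```cpp
#include <vector>

// Hypothetical request/response records for the simulation service.
// All fields are illustrative assumptions, not the real PVM message format.
struct SimulationRequest {
    double room_x_m, room_y_m, room_z_m;    // room dimensions (meters)
    double source_pos[3], listener_pos[3];  // impulse injection and recording points
    double peak_frequency_hz;               // f: highest frequency to resolve
    int    points_per_wavelength;           // k, typically 2..10
    double response_length_s;               // t: impulse response duration
};

struct SimulationResponse {
    std::vector<double> impulse_response;   // sampled pressure at the listener
    double sample_rate_hz;
};
```

Because each request carries the full problem description, the server keeps no per-client state between requests, which keeps the protocol simple and the servers interchangeable.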
Slide 13: Computation requirements
Example: 8x6x3 meter room, 2 second impulse response
O(xyz(kf)^4 t/c^3) stencil operations
[Plot: computational requirements against the current resource limit]
Slide 14: Adaptation Mechanisms for Simulation Service
O(xyz(kf)^4 t/c^3) stencil operations, where
– f = peak frequency to be resolved
– x, y, z = dimensions of the simulated space
– k = grid points per wavelength (2..10 typical)
– c = speed of sound in the medium
– t = length of the impulse response
Adaptation mechanisms:
– Peak frequency f is the key "knob" (see the cost sketch below)
– Impulse response length t
– Server or site selection
– Traditional load balancing
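A back-of-the-envelope cost sketch based on the formula above (constant factors dropped; the function and its default speed of sound are illustrative assumptions) shows why the peak frequency f, entering at the fourth power, dominates the other knobs.

```cpp
#include <cmath>
#include <cstdio>

// Stencil-operation estimate from O(x*y*z*(k*f)^4 * t / c^3), constants dropped.
double stencil_ops(double x, double y, double z, double k, double f,
                   double t, double c = 343.0) {
    return x * y * z * std::pow(k * f, 4) * t / std::pow(c, 3);
}

int main() {
    // 8x6x3 m room, k = 4 points per wavelength, 2 s impulse response
    std::printf("f = 500 Hz:  %.3g ops\n", stencil_ops(8, 6, 3, 4, 500, 2));
    std::printf("f = 1000 Hz: %.3g ops\n", stencil_ops(8, 6, 3, 4, 1000, 2));
    // Doubling f multiplies the work by 16, so lowering the peak frequency
    // is by far the most effective way to shed load.
}
```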
Slide 15: Adaptation Mechanisms for Filtering Service
O((kf)^2 t) ops/second per stream when using the impulse response as a FIR filter (sketched below)
Adaptation mechanisms:
– Peak frequency f is the key "knob"
– Impulse response length t
– IIR approximations of the impulse response filter
– Server or site selection
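For clarity, here is a minimal sketch of block-wise, time-domain FIR filtering of one audio stream with the simulated impulse response. How the filtering servers actually apply the filter is an assumption; a production server could just as well use block FFT convolution.

```cpp
#include <vector>
#include <cstddef>

// Convolve one block of an input stream with the room impulse response h.
// `history` holds the last h.size()-1 input samples from previous blocks so
// the stream can be processed block by block.
std::vector<float> fir_block(const std::vector<float>& block,
                             const std::vector<float>& h,
                             std::vector<float>& history) {
    std::vector<float> out(block.size(), 0.0f);
    for (std::size_t n = 0; n < block.size(); ++n) {
        float acc = 0.0f;
        for (std::size_t m = 0; m < h.size(); ++m) {
            // sample x[n-m]: from this block if n >= m, otherwise from the history tail
            float x;
            if (n >= m) x = block[n - m];
            else {
                std::size_t back = m - n;
                x = (back <= history.size()) ? history[history.size() - back] : 0.0f;
            }
            acc += h[m] * x;
        }
        out[n] = acc;
    }
    // keep only the most recent h.size()-1 input samples for the next block
    history.insert(history.end(), block.begin(), block.end());
    if (history.size() > h.size() - 1)
        history.erase(history.begin(), history.end() - (h.size() - 1));
    return out;
}
```

This is consistent with the O((kf)^2 t) figure if the impulse response is sampled at a rate proportional to kf: the filter then has on the order of kf·t taps, and each output sample costs one multiply-add per tap.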
Slide 16: Simulation Server Evaluation
Scalability
Appropriateness of SMP
Initial results on server selection
Slide 17: Experimental Environment (Cluster)
8 nodes (16 processors)
– Dual 866 MHz Pentium III
– 1 GB RAM
– Red Hat Linux 7.1
Switched gigabit Ethernet
Slide 18: Simulation Server Scales Well to 16 Processors
Slide 19: Efficiency Is Reasonable
Slide 20: SMP Is Useful (Not Memory-limited)
Slide 21: Server Selection Experiments
Choose from several sequential servers
Small problem size
– 500 Hz, 8x6x3 m, 2 seconds
– ~15 second task
Four server selection algorithms (sketched below)
– Random
– Load measurement
– Load prediction
– Real-time Scheduling Advisor (RTSA)
RPS: http://www.cs.northwestern.edu/~RPS
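A hedged sketch of the measurement- and prediction-based policies. The struct and function names are placeholders; in the real system the load numbers would come from the RPS toolkit, but this is not the RPS API.

```cpp
#include <vector>
#include <algorithm>
#include <string>

struct Host {
    std::string name;
    double measured_load;    // most recent load average sample
    double predicted_load;   // forecast load over the task's expected running time
};

// "Load measurement" policy: run on the host with the lowest current load.
const Host& pick_by_measurement(const std::vector<Host>& hosts) {
    return *std::min_element(hosts.begin(), hosts.end(),
        [](const Host& a, const Host& b) { return a.measured_load < b.measured_load; });
}

// "Load prediction" policy: run on the host whose predicted load over the
// next ~15 seconds (the task length) is lowest.
const Host& pick_by_prediction(const std::vector<Host>& hosts) {
    return *std::min_element(hosts.begin(), hosts.end(),
        [](const Host& a, const Host& b) { return a.predicted_load < b.predicted_load; });
}
```

The Random policy simply picks a host uniformly at random, and the RTSA additionally reasons about whether the task's deadline can be met on each candidate.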
Slide 22: Evaluation Methodology
100 repetitions, random arrivals
Host load trace playback for dynamic load
– Traces from a production PSC cluster
Metrics: mean and variance of task slowdown (defined below)
– Seek to minimize both
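Assuming the usual definition of slowdown (running time under contention divided by running time on an idle host, so 1.0 is ideal), a small sketch of the metric computation over the 100 repetitions:

```cpp
#include <vector>
#include <utility>
#include <numeric>
#include <cmath>

// slowdown of one task = elapsed time under load / elapsed time on an unloaded host
double slowdown(double elapsed_s, double unloaded_s) { return elapsed_s / unloaded_s; }

// mean and standard deviation of slowdown over all repetitions
std::pair<double, double> mean_stddev(const std::vector<double>& s) {
    double mean = std::accumulate(s.begin(), s.end(), 0.0) / s.size();
    double var = 0.0;
    for (double v : s) var += (v - mean) * (v - mean);
    return {mean, std::sqrt(var / s.size())};
}
```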
Slide 23: Experiment 0: No Challenge
Choose from 4 hosts with no load

Scheduler       Mean Slowdown   StdDev Slowdown
Random          1.00            0.0010
Load Measure    1.01            0.013
Load Predict    1.01            0.015
RTSA            1.01            0.015

All algorithms have low overhead
Slide 24: Experiment 1: Static Challenge
2 hosts with no load, 2 with high static load

Scheduler       Mean Slowdown   StdDev Slowdown
Random          1.64            0.49
Load Measure    1.01            0.011
Load Predict    1.01            0.014
RTSA            1.01            0.012

All algorithms respond well to the static load challenge
Slide 25: Experiment 2: Dynamic Challenge
1 host with high dynamic load, 1 with low dynamic load

Scheduler       Mean Slowdown   StdDev Slowdown
Random          1.44            0.30
Load Measure    1.26            0.14
Load Predict    1.14            0.09
RTSA            1.09            0.12

Prediction leads to enhanced performance here
A challenging case, but resources are often available
Slide 26: Experiment 3: More Dynamic Load
4 hosts, each with different low to high dynamic load

Scheduler       Mean Slowdown   StdDev Slowdown
Random          1.38            0.42
Load Measure    1.14            0.097
Load Predict    1.13            0.090
RTSA            1.14            0.096

All algorithms respond well
Slide 27: Experiment 4: All Dynamic High Load
4 hosts, each with high dynamic load

Scheduler       Mean Slowdown   StdDev Slowdown
Random          1.72            0.25
Load Measure    1.60            0.27
Load Predict    1.62            0.23
RTSA            1.64            0.29

Algorithms behave similarly
The most challenging scenario: few resources are available
Slide 28: Conclusion & Future Work
Introduced Virtualized Audio as an HPC application
Described the application structure
Identified adaptation mechanisms
Evaluated the scalability of one component
Showed early server selection results
Future work
– Dynamic load balancing of the simulation service in non-dedicated environments and Grids
– Dynamic load balancing with real-time constraints
– Continued development of the application
Slide 29: For More Information
http://www.cs.northwestern.edu/~donglu
http://www.cs.northwestern.edu/~pdinda
Resource Prediction System (RPS) Toolkit: http://www.cs.northwestern.edu/~RPS
PlayLoad: http://www.cs.northwestern.edu/~pdinda/LoadTraces/playload
Prescience Lab: http://www.cs.northwestern.edu/~plab