
Arindam Mallik, Jack Cosgrove, Robert P. Dick, Gokhan Memik, Peter Dinda. Northwestern University, Department of Electrical Engineering and Computer Science.

Presentation transcript:

1 Arindam Mallik, Jack Cosgrove, Robert P. Dick, Gokhan Memik, Peter Dinda. Northwestern University, Department of Electrical Engineering and Computer Science, Evanston, Illinois, USA. ASPLOS, March 3, 2008, Seattle, Washington, USA

2  Traditional performance metrics do not measure user-perceived performance well  Our performance metrics measure user-perceived performance better  PICSEL is a power management policy that uses our metrics to achieve system power improvements of up to 12.1% compared to existing policies

3 [Diagram: the CPU redraws the screen through main memory and the display; screenshots are captured, consecutive screenshots are compared, and the CPU frequency is changed]

4 [Same diagram as slide 3, with the CPU highlighted] “The ultimate goal of a computer system is to satisfy the user”

5  Power problem  DVFS  System performance  Traditional vs. user-perceived  PICSEL  How it works  Results  Conclusions

6  Energy-hungry processors present three major problems:  Higher energy consumption  Shorter battery life  Higher temperatures

7  Dynamic voltage and frequency scaling (DVFS) addresses all three problems  Trades off processor frequency for energy savings  Commonly used  Ideal DVFS policy: find the lowest level of performance acceptable to the user to maximize power savings
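The tradeoff behind DVFS can be sketched numerically: dynamic power scales roughly as C·V²·f, and because voltage is typically lowered together with frequency, power falls much faster than raw speed. A minimal illustration; the capacitance constant and voltage/frequency operating points below are assumptions for illustration, not measured values from the talk:

```python
def dynamic_power(freq_ghz, volts):
    """Dynamic CPU power ~ C * V^2 * f (effective capacitance folded into C)."""
    C = 10.0  # assumed effective-capacitance constant (arbitrary units)
    return C * volts ** 2 * freq_ghz

# Assumed operating points: full speed at higher voltage vs. half speed
# at a reduced voltage, as a DVFS policy would select.
high = dynamic_power(2.0, 1.3)  # 2.0 GHz at 1.3 V
low = dynamic_power(1.0, 1.0)   # 1.0 GHz at 1.0 V

# Halving frequency costs 2x in raw speed but saves far more in power,
# which is why finding the lowest acceptable level pays off.
savings = 1 - low / high
print(f"power savings at half speed: {savings:.0%}")
```

This is why the "ideal policy" framing matters: the power saved per step down is large, so any slack the user cannot perceive is worth reclaiming.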

8  The human in the loop is often the rate-limiter [Diagram of operating rates: Processor (GHz) ↔ Input Devices (kHz) / Output Devices (kHz) ↔ User (Hz)]

9  Traditional performance metrics focus on processor performance  “Close to metal” [Diagram: Processor (IPS) highlighted among input devices, output devices, and the user]

10  User-perceived performance metrics focus on interface device performance  “Close to flesh” [Diagram: Output Devices (Display, Speakers) and Input Devices (Mouse, Keyboard) highlighted; User (N/A); Processor]

11  Use the change in pixel intensities as a metric for user-perceived performance

12 Perception-Informed CPU performance Scaling to Extend battery Life (PICSEL)

13  Windows GDI screenshot  Capture a contiguous area of the screen  Repeat periodically  Compare RGB intensities across samples [Diagram: current sample (R_i, G_i, B_i) minus cached previous sample (R_{i-1}, G_{i-1}, B_{i-1}) = (R_Δ, G_Δ, B_Δ)]
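The comparison step can be sketched in a few lines. The slide's GDI capture is Windows-specific, so the "screenshots" here are stand-in 2-D lists of (R, G, B) tuples; only the differencing against the cached previous sample is shown:

```python
def pixel_deltas(prev, curr):
    """Per-pixel absolute RGB differences between two screenshots.

    prev, curr: 2-D lists of (R, G, B) tuples with identical dimensions
    (prev is the cached previous sample).  Returns a 2-D list of
    (R_delta, G_delta, B_delta) tuples.
    """
    return [
        [tuple(abs(c - p) for p, c in zip(p_px, c_px))
         for p_px, c_px in zip(p_row, c_row)]
        for p_row, c_row in zip(prev, curr)
    ]

# Two 1x2 "screenshots": the first pixel is unchanged, the second brightened.
cached = [[(10, 20, 30), (0, 0, 0)]]
sample = [[(10, 20, 30), (60, 90, 120)]]
print(pixel_deltas(cached, sample))  # [[(0, 0, 0), (60, 90, 120)]]
```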

14  Average Pixel Change (APC)  APC = (R_Δ + G_Δ + B_Δ) / 3, averaged across all pixels  Measures “slowness” of display  Rate of Average Pixel Change (APR)  APR = (APC_i − APC_{i-1}) / (T_i − T_{i-1})  Measures “jitter” of display

15  PICSEL uses <2% CPU utilization  The target applications cost 50-100% CPU utilization

16

17 [Plot: APC and APR over time, with a “no change” band; a decision is made at marked points, increasing frequency when a mark falls outside the band]

18
State Variables                     Adaptation Parameters
Processor frequency (f)             Hysteresis factor (α)
APC in the last interval (μ_APC)    APC change threshold (ρ)
APR in the last interval (μ_APR)    APR change threshold (γ)

IF (APC_init − μ_APC) < ρ × (1 − α) × APC_init
   OR |APR_init − μ_APR| < γ × (1 − α) × APR_init
  Reduce f by one level
  Reset α of the last level to 0.0
ELSE
  Increase f by one level
  Increment α by 0.1
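The decision rule above can be sketched as one function. The thresholds default to the conservative (cPICSEL) values from the next slide; the number of frequency levels and the clamping at the top and bottom level are assumptions, since the slide does not specify them:

```python
def picsel_step(f_level, alpha, apc_init, mu_apc, apr_init, mu_apr,
                rho=0.05, gamma=0.15, num_levels=5):
    """One PICSEL adaptation step.  Returns (new_f_level, new_alpha).

    f_level: current frequency level, 0 = lowest, num_levels - 1 = highest.
    alpha:   hysteresis factor for the current level.
    """
    apc_close = (apc_init - mu_apc) < rho * (1 - alpha) * apc_init
    apr_close = abs(apr_init - mu_apr) < gamma * (1 - alpha) * apr_init
    if apc_close or apr_close:
        # Display activity stayed near the baseline: the user likely cannot
        # tell, so drop a level and clear the hysteresis of the level left.
        return max(f_level - 1, 0), 0.0
    # Display changed noticeably: raise frequency, and grow the hysteresis
    # factor so the policy backs off more slowly next time.
    return min(f_level + 1, num_levels - 1), alpha + 0.1

# APC in the last interval matched the baseline, so frequency is lowered:
print(picsel_step(3, 0.0, apc_init=40.0, mu_apc=39.0,
                  apr_init=10.0, mu_apr=25.0))  # (2, 0.0)
```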

19
PICSEL Version                 T_initialize (sec)  T_decide (sec)  APC Change  APR Change  Hyst. Factor
Conservative PICSEL (cPICSEL)  10                  7               0.05        0.15        0.0
Aggressive PICSEL (aPICSEL)    10                  7               0.10        0.30        0.0

 All values chosen by the authors after testing with the target applications  Constructing ideal values exhaustively would take too long (243 days)  The user evaluation “closed the loop”

20  20 users  Shockwave animation and DVD movie each play for 2 minutes  FIFA game plays for 3.5 minutes  Three randomly selected trials per application  One double-blind DVFS policy for each trial  User rates satisfaction from one (lowest) to five (highest) after each trial

21

22

23

24
DVFS Policy             System Power Improvement  Dynamic Power Improvement  CPU Peak Temperature Reduction  User Satisfaction (out of five)
aPICSEL                 12.1%                     18.2%                      4.3 °C                          3.65*
cPICSEL                 7.1%                      9.1%                       1.7 °C                          3.80**
Windows DVFS (control)  —                         —                          —                               3.68

* Not different with 95% confidence
** Different with 90% confidence

25 (same table as slide 24)

26

27 [Figure: perceived slowdown]

28
DVFS Policy   Total Thermal Emergencies during Game for All Users
aPICSEL       52
cPICSEL       51
Windows DVFS  59

 User satisfaction is maximized by cPICSEL  Frequency is high enough to deliver good performance but not high enough to trigger thermal emergencies

29  Display performance is a better metric for controlling DVFS than processor performance  Existing processor-performance-based DVFS policies have slack that can be exploited  Cost of monitoring the display output is low  User satisfaction is the same or better

30  Based on GUI events  Gurun, S. and Krintz, C. 2005. AutoDVS: An Automatic, General-purpose, Dynamic Clock Scheduling System for Hand-held Devices. In Proc. of the 5th ACM Int. Conf. on Embedded Software (EMSOFT’05), 218-226.  Based on application messages  Flautner, K. and Mudge, T. 2002. Vertigo: Automatic Performance-Setting for Linux. ACM SIGOPS Operating Systems Review 36, SI (Winter 2002), 105-116.

31 Check out “Empathic Computer Architectures and Systems” at Wild and Crazy Ideas, and visit empathicsystems.org for more user-centered systems research

