
IIS Progress Report 2016/01/11

Goal Propose an energy-efficient scheduler that minimizes power consumption while providing sufficient computing resources to meet the expected throughput of each virtual core. ◦ Expected throughput: the amount of workload a virtual core must complete in a time period.

Model - Cores Assume n virtual cores and m physical cores. For every time period: ◦ Virtual core i has expected throughput/workload w_i, w_i ∈ W. ◦ Physical core j runs at frequency f_j, f_j ∈ F.  F: the set of available frequencies on the core.

Model – Power Consumption The power consumption P_j of core j is a function of its frequency f_j and its load L_j. ◦ L_j = (the amount of workload on j) / f_j. ◦ For a given f_j, P_j is linear in L_j. We write P_j as: P_j = m(f_j) · L_j + c_j. ◦ m(f_j): the gradient (slope of the power/load line at frequency f_j). ◦ c_j: the idle power.
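
A minimal sketch of this power model in Python; the frequency levels, gradients, and idle power below are made-up illustrative values, not measurements from the slides.

# Hypothetical power-model coefficients (illustrative values only).
# GRADIENT[f] is m(f), the slope of the power/load line at frequency f (watts per unit load).
GRADIENT = {0.6: 0.8, 0.8: 1.2, 1.0: 1.9, 1.2: 2.9}
IDLE_POWER = 0.5  # c_j, watts

def core_power(freq, workload, idle_power=IDLE_POWER):
    """P_j = m(f_j) * L_j + c_j, with load L_j = workload / f_j."""
    load = workload / freq
    assert 0.0 <= load <= 1.0, "the workload must fit within the core's capacity"
    return GRADIENT[freq] * load + idle_power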

Objective Generate f_j and a set of a_i,j that minimize the total power consumption, where a_i,j is the amount of virtual core i's workload assigned to physical core j:
minimize ∑_j P_j = ∑_j ( m(f_j) · L_j + c_j ), with L_j = ( ∑_i a_i,j ) / f_j.
Constraints:
◦ ∑_j a_i,j = w_i for every virtual core i (its expected throughput is met).
◦ ∑_i a_i,j ≤ f_j for every physical core j (equivalently, 0 ≤ L_j ≤ 1).

Finding A Solution We cannot apply the simplex method directly to find a_i,j and f_j. ◦ The objective function is not of the linear form ∑_i c_i · x_i where each c_i is a given constant. ◦ In our objective function, both m(f_j) and L_j depend on the decision variables, so the term m(f_j) · L_j is nonlinear.

Deciding the Frequency Assume f_n = α · f_(n-1). Two scenarios: ◦ 1. Evenly distribute the workload to two cores running at the same frequency f_n. ◦ 2. Uneven distribution: one core at f_(n+1) and the other at f_(n-1).

Deciding the Frequency (Cont.) Let P be the power of scenario 1 and P' the power of scenario 2. If m(f_n) = α · m(f_(n-1)), then P' - P = 0. ◦ However, in most real-world cases, m(f_n) > α · m(f_(n-1)), which makes P' - P > 0. ◦ “Choosing the frequencies evenly among cores (of the same type) is more energy-efficient.”
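
A worked check of the equality case (not spelled out on the slides): assume the uneven split of scenario 2 fully loads both cores, so the total workload is W = f_(n+1) + f_(n-1), and both cores have the same idle power c. In LaTeX notation:

\begin{aligned}
P  &= 2\Big( m(f_n)\,\tfrac{W/2}{f_n} + c \Big)
    = m(f_n)\,\tfrac{f_{n+1}+f_{n-1}}{f_n} + 2c
    = m(f_n)\big(\alpha + \tfrac{1}{\alpha}\big) + 2c, \\
P' &= m(f_{n+1}) + m(f_{n-1}) + 2c .
\end{aligned}

If m(f_n) = α · m(f_(n-1)) at every level, then m(f_(n+1)) = α · m(f_n) and m(f_(n-1)) = m(f_n)/α, so P' - P = α·m(f_n) + m(f_n)/α - m(f_n)·(α + 1/α) = 0.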

Selecting Frequencies “Choosing the frequencies evenly among cores (of the same type) is more energy-efficient.” However, this does not always work. ◦ Some w_i may be larger than the evenly chosen frequency f, so a single core at that frequency cannot meet that virtual core's expected throughput.

Our Method Decide f_j for j = 0 ~ m-1: ◦ First sort the w_i in descending order, and set f_0 ~ f_(m-1) to w_0 ~ w_(m-1). ◦ While ∑_i w_i > ∑_j f_j, raise the smallest f_j by one frequency level; stop once the total capacity covers the total workload.
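
A minimal Python sketch of this step. It assumes a discrete, ascending list of available frequency levels (FREQ_LEVELS, illustrative values) and rounds each initial f_j up to the nearest level; the rounding is my assumption, the slide simply sets f_j = w_j.

import bisect

FREQ_LEVELS = [0.6, 0.8, 1.0, 1.2, 1.5]  # hypothetical ascending frequency levels (e.g. GHz)

def decide_frequencies(workloads, m, levels=FREQ_LEVELS):
    """Seed f_0..f_{m-1} from the m largest workloads, then raise the smallest
    frequency one level at a time until total capacity covers total workload."""
    w = sorted(workloads, reverse=True)
    seeds = w[:m] + [0.0] * max(0, m - len(w))           # pad if fewer workloads than cores
    # Round each seed up to the nearest available level (clamped at the top level).
    freqs = [levels[min(bisect.bisect_left(levels, wi), len(levels) - 1)] for wi in seeds]
    while sum(w) > sum(freqs):
        j = freqs.index(min(freqs))                      # core with the smallest frequency
        idx = levels.index(freqs[j])
        if idx == len(levels) - 1:
            break                                        # every core is already at the top level
        freqs[j] = levels[idx + 1]
    return freqs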

Assigning Cores Start from the largest w_i and assign it to the core with the largest frequency f_j that still has spare resource s_j. ◦ s_j: the spare resource on core j. If w_i > s_j, assign the remaining (w_i - s_j) to core j+1, the core with the next largest frequency f_(j+1) that has spare resource.
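
A sketch of this greedy assignment in Python; it produces the per-core shares a_i,j that are later turned into energy credits. The tie-breaking and the tolerance constant are my choices, not from the slides.

def assign_workloads(workloads, freqs):
    """Greedy assignment: largest workload first onto the fastest core with spare
    capacity, spilling any remainder onto the next-fastest core with spare capacity.
    Returns a[i][j], the share of virtual core i placed on physical core j."""
    order = sorted(range(len(freqs)), key=lambda j: freqs[j], reverse=True)
    spare = {j: freqs[j] for j in order}                 # s_j: spare resource on core j
    a = [[0.0] * len(freqs) for _ in workloads]
    for i, w in sorted(enumerate(workloads), key=lambda x: x[1], reverse=True):
        remaining = w
        for j in order:
            if remaining <= 1e-9:
                break
            if spare[j] <= 1e-9:
                continue
            share = min(remaining, spare[j])
            a[i][j] = share
            spare[j] -= share
            remaining -= share
        assert remaining <= 1e-9, "frequencies from decide_frequencies should cover all workloads"
    return a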

Scheduling vCPUs Theoretically, we can solve the Open-Shop Scheduling Problem (OSSP) with preemption to generate a scheduling plan. However, plans generated this way may not be practical in the real world. ◦ They do not consider interrupts, I/O delays, etc.

Energy Credit We transform a_i,j into an energy credit for each virtual core i on physical core j. ◦ Executing a vCPU on a physical core consumes its credits for that core. ◦ Credits are refilled at the start of every time period. We schedule vCPUs according to their energy credits.
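
A minimal sketch of the credit bookkeeping, assuming one credit corresponds to one unit of workload executed on the corresponding physical core (the slides do not define the credit unit):

class VCpu:
    """Tracks one virtual core's energy credits on each physical core."""
    def __init__(self, vid, shares):
        self.vid = vid
        self.credits = list(shares)          # credit vector: one entry per physical core

    def refill(self, shares):
        """Called at the start of every period with the new a_i,j values."""
        self.credits = list(shares)

    def consume(self, core, amount):
        """Running on `core` for `amount` of work burns that core's credits."""
        self.credits[core] = max(0.0, self.credits[core] - amount)

    def has_credit(self, core):
        return self.credits[core] > 1e-9

# Example wiring (using the earlier sketches):
# vcpus = [VCpu(i, row) for i, row in enumerate(assign_workloads(workloads, freqs))]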

Objective Each vCPU has a credit vector. ◦ Each entry is the amount of credits remaining for one physical core. The scheduling objective is to minimize the credits remaining across all vCPUs before the next refill.

Selecting vCPUs for Execution Each time a physical core becomes available, pick a vCPU with remaining credits for execution. ◦ Global run-queue vs. per-core run-queue. ◦ Per-core run-queue:  Each core has its own run-queue.  A vCPU can exist in only one run-queue at a time.  The queues are FIFO.

Algorithm #1 Each core keeps executing a vCPU until:
 1. an interrupt occurs,
 2. the vCPU waits for I/O, or
 3. the vCPU has no credits remaining.
◦ If 1: resume the vCPU after handling the interrupt.
◦ Else: select the next vCPU with credits from this core's run-queue.
 If no vCPU in the run-queue is available:
 If 2: idle.
 Else: migrate back a vCPU that still has remaining credits for this core (from another core's run-queue).
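
A hedged Python sketch of this decision logic, reusing the VCpu sketch above; the function name next_vcpu_algo1 and the 'interrupt'/'io'/'no_credit' event labels are mine, not from the slides.

from collections import deque

IDLE = None

def next_vcpu_algo1(core, runqueues, vcpus, event, current):
    """Decide what `core` runs next under Algorithm #1.
    `event` is why the current vCPU stopped: 'interrupt', 'io', or 'no_credit'.
    `runqueues[core]` is a FIFO deque of vCPU ids queued on this core."""
    if event == 'interrupt':
        return current                        # resume the same vCPU after handling the interrupt
    rq = runqueues[core]
    while rq:
        cand = rq.popleft()
        if vcpus[cand].has_credit(core):      # skip vCPUs whose credits for this core ran out
            return cand
    if event == 'io':
        return IDLE                           # nothing runnable: idle
    # event == 'no_credit': pull back a vCPU that still holds credits for this core
    for other, q in runqueues.items():
        if other == core:
            continue
        for cand in list(q):
            if vcpus[cand].has_credit(core):
                q.remove(cand)                # a vCPU lives in only one run-queue at a time
                return cand
    return IDLE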

Algorithm #2 Each core keeps executing a vCPU until:
 1. an interrupt occurs,
 2. the vCPU waits for I/O, or
 3. the vCPU's consecutive execution time exceeds t ms.
◦ If 1: resume the vCPU after handling the interrupt.
◦ Elif 2: select the next vCPU with credits from this core's run-queue.
 If no vCPU in the run-queue is available: idle.
◦ Else: compute the expected waiting time (see the next slide).

Expected Waiting Time For each non-zero entry i of the vCPU's credit vector: ◦ Compute the expected waiting time of the vCPU on core i.  A simple estimate: n * t.  n: the number of vCPUs with remaining credits in core i's run-queue; t: the time slice from Algorithm #2. Migrate the vCPU to the core with the least expected waiting time.
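
A small sketch of the migration choice using the n * t estimate above; the function name and parameters are illustrative, and it reuses the VCpu and run-queue structures from the earlier sketches.

def pick_migration_target(vcpu, runqueues, vcpus, t_ms):
    """Estimate the waiting time on every core where this vCPU still has
    credits (n queued vCPUs * t ms each) and return the core with the smallest."""
    best_core, best_wait = None, float('inf')
    for core, remaining in enumerate(vcpu.credits):
        if remaining <= 1e-9:
            continue                              # no credits left on this core
        n = sum(1 for vid in runqueues[core] if vcpus[vid].has_credit(core))
        wait = n * t_ms                           # simple estimate from the slide
        if wait < best_wait:
            best_core, best_wait = core, wait
    return best_core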

Observation ◦ Since a vCPU can have credits on at most two cores under our assigning method, the overhead of deciding which core to migrate to is small. (Figure: vCPUs mapped to the run-queues of Core 1, Core 2, Core 3, Core 4, …)

Simulation Generate workloads with I/O and interrupts. ◦ The frequency and length of the I/O and interrupt events follow chosen distributions. Compare utilization, remaining credits, and power consumption against other scheduling algorithms.
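
A sketch of a workload generator along these lines; the slides only say the frequency and length follow some distributions, so the exponential inter-arrival times and uniform lengths below are assumptions.

import random

def generate_workload(duration_ms, io_rate=0.01, irq_rate=0.005,
                      io_len=(1.0, 5.0), irq_len=(0.1, 0.5), seed=None):
    """Generate a timeline of I/O and interrupt events for one vCPU.
    Inter-arrival times are exponential; event lengths are uniform ranges.
    All rates and lengths are illustrative placeholders, not measured values."""
    rng = random.Random(seed)
    events = []
    for kind, rate, span in (('io', io_rate, io_len), ('interrupt', irq_rate, irq_len)):
        t = rng.expovariate(rate)
        while t < duration_ms:
            events.append((t, kind, rng.uniform(*span)))
            t += rng.expovariate(rate)
    return sorted(events)                         # (time, kind, length) tuples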

Algorithms for Comparison ◦ Credit-based scheduler ◦ OSSP with preemption ◦ Most Remaining Credit First ◦ …

Discussion