External Sorting Sort n records/elements that reside on a disk.


1 External Sorting Sort n records/elements that reside on a disk.
The space needed by the n records is very large. Either n is very large and each record may be large or small, or n is small but each record is very large. Either way, it is not feasible to input all n records, sort them in memory, and output them in sorted order.

2 Small n But Large File
Input only the record keys. Sort the n keys to determine the sorted order for the n records. Then permute the records into the desired order (possibly several fields at a time).
Alternatively, input the front part (including the key) of each of the n records so as to fill memory, sort, and then permute the remainder.
Or: input the records, compacting the keys together (1 input pass); sort the keys; then input the records again, writing each to its proper place on disk, one record at a time (1 input pass and 1 output pass). Even though this makes n random-access writes, n is small and each record is large, so this is acceptable.
Even with large n, a large file may require first determining the sort permutation and then rearranging. We focus on the case: large n, large file.
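The key-sort-then-permute idea above can be sketched as follows (the `Record` type, `sortByKey` name, and payload field are illustrative, not from the slides):

```cpp
#include <algorithm>
#include <numeric>
#include <string>
#include <vector>

// Hypothetical record: a small key plus a large payload.
struct Record {
    int key;
    std::string payload;  // stands in for a large record body
};

// Sort only the keys (with their original positions), then permute
// the records into sorted order.
std::vector<Record> sortByKey(const std::vector<Record>& records) {
    // Step 1: input just the keys, remembering each record's position.
    std::vector<int> order(records.size());
    std::iota(order.begin(), order.end(), 0);

    // Step 2: sort the (small) index array by key.
    std::sort(order.begin(), order.end(),
              [&](int a, int b) { return records[a].key < records[b].key; });

    // Step 3: permute the (large) records into the desired order.
    std::vector<Record> sorted;
    sorted.reserve(records.size());
    for (int i : order) sorted.push_back(records[i]);
    return sorted;
}
```

Only the index array is sorted; each large record is moved exactly once.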

3 New Data Structures/Concepts
Tournament trees. Huffman trees. Double-ended priority queues. Buffering. These ideas may also be used to speed up algorithms on small instances by using the cache more efficiently.

4 External Sort Computer Model
[Diagram: ALU ↔ MAIN memory ↔ DISK]

5 Disk Characteristics
[Diagram: disk platter with tracks and a read/write head]
Seek time: approx. 100,000 arithmetic operations.
Latency time: approx. 25,000 arithmetic operations.
Transfer time: data is accessed by block.
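As a rough illustration, the cost of one block access in "equivalent arithmetic operations" might be modeled as below (the seek and latency constants come from the slide; the transfer term and function name are assumptions):

```cpp
// Rough cost, in equivalent arithmetic operations, of reading one block,
// using the slide's ballpark figures (assumed constants, not measurements).
long long blockAccessCost(long long transferOps) {
    const long long kSeek = 100000;    // move read/write head to the track
    const long long kLatency = 25000;  // wait for the sector to rotate under the head
    return kSeek + kLatency + transferOps;
}
```

Even with zero transfer cost, one block access costs about 125,000 arithmetic operations, which is why external algorithms count block accesses rather than operations.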

6 Traditional Internal Memory Model
[Diagram: ALU ↔ MAIN memory]
Let’s digress for a moment and take a closer look at the computer model sans disk. Classical algorithm analysis is done using this model: the number of operations is closely related to the number of memory accesses, so we may simply count operations. However, it is not a very accurate model of internal memory, and it has severe limitations when trying to explain observed performance numbers.

7 Matrix Multiplication
for (int i = 0; i < n; i++)
  for (int j = 0; j < n; j++)
    for (int k = 0; k < n; k++)
      c[i][j] += a[i][k] * b[k][j];

The ijk, ikj, jik, jki, kij, and kji loop orders all yield the same result, and all perform the same number of operations. But run time may differ significantly!
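A runnable sketch of two of the six orders, confirming they compute the same product (the function names are illustrative):

```cpp
#include <vector>

using Matrix = std::vector<std::vector<double>>;

// Multiply with the loop nest in ijk order.
Matrix multiplyIJK(const Matrix& a, const Matrix& b) {
    int n = a.size();
    Matrix c(n, std::vector<double>(n, 0.0));
    for (int i = 0; i < n; i++)
        for (int j = 0; j < n; j++)
            for (int k = 0; k < n; k++)
                c[i][j] += a[i][k] * b[k][j];
    return c;
}

// Same computation with the j and k loops exchanged (ikj order).
Matrix multiplyIKJ(const Matrix& a, const Matrix& b) {
    int n = a.size();
    Matrix c(n, std::vector<double>(n, 0.0));
    for (int i = 0; i < n; i++)
        for (int k = 0; k < n; k++)
            for (int j = 0; j < n; j++)
                c[i][j] += a[i][k] * b[k][j];
    return c;
}
```

Addition is commutative and associative here (ignoring floating-point rounding), so reordering the loops only changes the memory access pattern, not the result.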

8 More Accurate Memory Model
[Diagram: ALU → registers (8-32, 1 cycle) → L1 cache (32KB, 2 cycles) → L2 cache (256KB, 10 cycles) → MAIN memory (1GB, 100 cycles)]
The cost of an L2 miss may be much larger on newer processors: more than 100x, rather than 10x, the cost of an L1 miss. Cache organization matters (cache lines). In certain applications (e.g., packet routing), people count only cache misses and assume arithmetic operations are free.

9 2D Array Representation In Java, C, and C++
int x[3][4];

[Diagram: array-of-arrays representation. x[] holds references to the three row arrays, holding elements b c d e, f g h i, and j k l.]

The 4 separate arrays are x, x[0], x[1], and x[2]. When you access x[0][0], a cache line is loaded into L2 from main memory.
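In C and C++ (unlike Java's array-of-arrays), the declared 2D array is one contiguous block in row-major order, which is what makes row traversal cache-friendly. A minimal sketch of that index mapping (the helper name is illustrative):

```cpp
// For a C/C++ array declared int x[ROWS][COLS], element x[i][j] lives at
// flat offset i*COLS + j in one contiguous block, so consecutive j values
// touch consecutive memory words (and share cache lines).
int rowMajorIndex(int i, int j, int cols) {
    return i * cols + j;
}
```

Walking a row advances the flat offset by 1 per step; walking a column advances it by `cols`, skipping to a new cache line almost every step.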

10 ijk Order

for (int i = 0; i < n; i++)
  for (int j = 0; j < n; j++)
    for (int k = 0; k < n; k++)
      c[i][j] += a[i][k] * b[k][j];

[Diagram: C = A * B, showing which parts of C, A, and B the loop order touches]

For analysis, assume a one-level cache with an LRU replacement policy.

11 ijk Analysis
Block size = width of cache line = w. Assume a one-level cache.
C => n²/w cache misses.
A => n³/w cache misses, when n is large.
B => n³ cache misses, when n is large.
Total cache misses = (n³/w)(1/n + 1 + w).
The analysis is rather simplistic. An accurate analysis would need to know the cache-line replacement strategy in use (direct mapped, random, LRU, etc.).

12 ikj Order

for (int i = 0; i < n; i++)
  for (int k = 0; k < n; k++)
    for (int j = 0; j < n; j++)
      c[i][j] += a[i][k] * b[k][j];

13 ikj Analysis
C => n³/w cache misses, when n is large.
A => n²/w cache misses.
B => n³/w cache misses, when n is large.
Total cache misses = (n³/w)(2 + 1/n).

14 ijk Vs. ikj Comparison
ijk cache misses = (n³/w)(1/n + 1 + w).
ikj cache misses = (n³/w)(2 + 1/n).
ijk/ikj ~ (1 + w)/2, when n is large.
w = 4 (32-byte cache line, double-precision data): ratio ~ 2.5.
w = 8 (64-byte cache line, double-precision data): ratio ~ 4.5.
w = 16 (64-byte cache line, integer data): ratio ~ 8.5.
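The comparison above can be checked numerically; the functions below simply encode the slides' miss formulas (the function names are illustrative):

```cpp
#include <cmath>

// Predicted cache-miss totals from the analyses, as multiples of n^3/w:
// ijk -> (1/n + 1 + w),  ikj -> (2 + 1/n).
double ijkMisses(double n, double w) { return 1.0 / n + 1.0 + w; }
double ikjMisses(double n, double w) { return 2.0 + 1.0 / n; }

// Their ratio approaches (1 + w) / 2 as n grows.
double missRatio(double n, double w) { return ijkMisses(n, w) / ikjMisses(n, w); }
```

For large n the 1/n terms vanish, giving the (1 + w)/2 ratios quoted above.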

15 Prefetch
Prefetch can hide memory latency. Successful prefetch requires the ability to predict a memory access well in advance. Prefetch cannot reduce energy consumption, because it does not reduce the number of memory accesses.
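A sketch of software prefetch using the GCC/Clang `__builtin_prefetch` intrinsic (compiler-specific; the prefetch distance of 16 elements is an assumed tuning parameter, and the function name is illustrative):

```cpp
#include <vector>

// Sequential scan with an explicit prefetch hint. The access pattern is
// predictable, so the address needed ~16 iterations from now is known
// and can be requested early enough to hide memory latency.
long long sumWithPrefetch(const std::vector<long long>& a) {
    long long total = 0;
    for (std::size_t i = 0; i < a.size(); i++) {
#if defined(__GNUC__)
        if (i + 16 < a.size()) __builtin_prefetch(&a[i + 16]);
#endif
        total += a[i];
    }
    return total;
}
```

The hint changes only when the data is fetched, not how often, which is why prefetch hides latency but does not save memory-access energy.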

16 Faster Internal Sorting
We may apply external sorting ideas to internal sorting. An internal tiled merge sort gives a 2x (or more) speedup over traditional merge sort.
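A minimal sketch of a tiled merge sort, under the assumption that the tile size is chosen so a tile fits in cache (the function name and default tile size are illustrative):

```cpp
#include <algorithm>
#include <vector>

// Tiled merge sort: sort cache-sized tiles first, then merge runs
// pairwise. The tile size is an assumed tuning parameter.
void tiledMergeSort(std::vector<int>& a, std::size_t tile = 1024) {
    std::size_t n = a.size();
    // Phase 1: sort each tile while it (ideally) fits in cache.
    for (std::size_t lo = 0; lo < n; lo += tile)
        std::sort(a.begin() + lo, a.begin() + std::min(lo + tile, n));
    // Phase 2: merge sorted runs of doubling length, as in merge sort.
    for (std::size_t run = tile; run < n; run *= 2)
        for (std::size_t lo = 0; lo + run < n; lo += 2 * run)
            std::inplace_merge(a.begin() + lo, a.begin() + lo + run,
                               a.begin() + std::min(lo + 2 * run, n));
}
```

Phase 1 does most of its comparisons on in-cache data, which is the source of the speedup over a plain top-down merge sort.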

17 External Sort Methods
Base the external sort method on a fast internal sort method.
Best average run time: quick sort.
Best worst-case run time: merge sort.

18 Internal Quick Sort Use 6 as the pivot.
2 8 5 11 10 4 1 9 7 3 6

Here the middle group has size 1. In the external-sort adaptation, we make the middle group as large as possible. Sort the left and right groups recursively.
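The three-way split around the pivot can be sketched as follows (a minimal in-memory version; the function and variable names are illustrative):

```cpp
#include <vector>

// Three-way partition around a pivot: elements < pivot go to "small",
// elements > pivot go to "large", and the rest form the middle group.
// In the external adaptation the middle group is grown as large as
// memory allows; here it just collects exact matches.
void partition3(const std::vector<int>& a, int pivot,
                std::vector<int>& small, std::vector<int>& middle,
                std::vector<int>& large) {
    for (int x : a) {
        if (x < pivot) small.push_back(x);
        else if (x > pivot) large.push_back(x);
        else middle.push_back(x);
    }
}
```

Only the small and large groups need further sorting; the middle group is already in its final position.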

19 Quick Sort – External Adaptation
[Diagram: DISK feeds an input buffer; small and large output buffers write back to disk; the rest of memory holds the middle group]

Use 3 input/output buffers: input, small, and large; each buffer is, say, 1 block long. The rest of memory is used for the middle group.

20 Quick Sort – External Adaptation
Fill the middle group from disk, then fill the input buffer. Processing is delayed when the input buffer is empty or when the small/large buffer is full. For each incoming record:
if the next record <= middle min, send it to small;
if the next record >= middle max, send it to large;
otherwise, remove the middle min or the middle max and add the new record to the middle group.
Alternate between removing the min and the max element of the middle group so as to balance the sizes of the small and large groups.
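The steps above can be sketched with `std::multiset` standing in for the double-ended priority queue (the `MiddleGroup` type, its capacity parameter, and the use of plain ints as records are all illustrative simplifications):

```cpp
#include <iterator>
#include <set>
#include <vector>

// Middle-group maintenance for the external quick sort adaptation.
// std::multiset gives O(log n) access to both the min and the max,
// i.e. it acts as a double-ended priority queue.
struct MiddleGroup {
    std::multiset<int> middle;
    std::vector<int> small, large;  // stand in for the small/large buffers
    std::size_t capacity;

    explicit MiddleGroup(std::size_t cap) : capacity(cap) {}

    void add(int record) {
        if (middle.size() < capacity) { middle.insert(record); return; }
        if (record <= *middle.begin()) {
            small.push_back(record);            // next record <= middle min
        } else if (record >= *middle.rbegin()) {
            large.push_back(record);            // next record >= middle max
        } else if (small.size() <= large.size()) {
            small.push_back(*middle.begin());   // evict the min to balance groups
            middle.erase(middle.begin());
            middle.insert(record);
        } else {
            large.push_back(*middle.rbegin());  // evict the max to balance groups
            middle.erase(std::prev(middle.end()));
            middle.insert(record);
        }
    }
};
```

When the input is exhausted, iterating over `middle` yields the middle group in sorted order, ready to be written out.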

21 Quick Sort – External Adaptation
Fill the input buffer when it gets empty. Write the small/large buffer when it gets full. When done, write out the middle group in sorted order. The middle group is maintained as a double-ended priority queue. Use additional buffers to reduce I/O wait time.

