
1 High Performance Embedded Computing © 2007 Elsevier Lecture 11: Memory Optimizations Embedded Computing Systems Mikko Lipasti, adapted from M. Schulte Based on slides and textbook from Wayne Wolf

2 © 2006 Elsevier Topics
- List scheduling
- Loop transformations
- Global optimizations
- Buffers, data transfers, and storage management
- Cache and scratch-pad optimizations
- Main memory optimizations

3 © 2006 Elsevier List Scheduling
Given a data flow graph (DFG), how do you assign (schedule) operations to particular slots? Schedule based on a priority function:
- Compute the longest path from each node to a leaf.
- Schedule nodes with the longest path first.
- Keep track of readiness (based on result latency).
Goal: maximize ILP; fill as many issue slots with useful work as possible (minimize NOPs).

4 © 2006 Elsevier List Scheduling Example
- Heuristic: no guarantee of optimality.
- For parallel pipelines, account for structural hazards.
Pick 3 — ready list: 3(4), 1(4), 2(3), 4(2), 5(0), 6(0)
Pick 1 — ready list: 1(4), 2(3), 4(2), 5(0), 6(0)
Pick 2 — ready list: 2(3), 4(2), 5(0), 6(0)
Pick 4 — ready list: 4(2), 5(0), 6(0)
Pick 6 (5 not ready) — ready list: 5(0), 6(0)
Pick 5 — ready list: 5(0)

5 © 2006 Elsevier Local vs. Global Scheduling
Single-entry, single-exit (e.g., basic block) scope:
- Limited opportunity, since a basic block typically holds only 4-5 instructions.
Expand the scope:
- Unroll loop bodies, inline small functions.
- Construct superblocks and hyperblocks, which are single-entry, multiple-exit sequences of blocks.
Code motion across control-flow divergence:
- Speculative; consider safety (exceptions) and state.
- Predication is useful to nullify wrong-path instructions.

6 © 2006 Elsevier Memory-oriented optimizations
- Memory is a key bottleneck in many embedded systems.
- Memory usage can be optimized at any level of the memory hierarchy.
- Optimizations can target data or instructions.
- Global memory analysis can be particularly useful: it is important to size buffers between subsystems to avoid buffer overflow and wasted memory.

7 © 2006 Elsevier Loop transformations
- Data dependencies may be within or between loop iterations.
- Ideal loops are fully parallelizable.
- A loop nest has loops enclosed by other loops.
- A perfect loop nest contains no conditional statements; all computation appears in the innermost loop body.

8 © 2006 Elsevier Types of loop transformations
- Loop permutation changes the order of loops.
- Index rewriting changes the form of the loop indexes.
- Loop unrolling copies the loop body.
- Loop splitting creates separate loops for operations in the loop body.
- Loop fusion (loop merging) combines loop bodies.
- Loop padding adds data elements to an array to change how the array maps into memory.

9 © 2006 Elsevier Polytope model
- Commonly used to represent data dependencies in loop nests.
- Loop transformations can be modeled as matrix operations on the iteration space; each column represents iteration bounds.
(Figure: iteration-space polytope over indices i and j.)

10 © 2006 Elsevier Loop permutation
- Changes the order of loop indices.
- Can help reduce the time needed to access matrix elements.
- 2-D arrays in C are stored in row-major order, so access the data row by row.
- Example: matrix-vector multiplication.

11 © 2006 Elsevier Loop fusion
Combines loop bodies.

Original loops:
    for (i = 0; i < N; i++)
        x[i] = a[i] * b[i];
    for (i = 0; i < N; i++)
        y[i] = a[i] * c[i];

After loop fusion:
    for (i = 0; i < N; i++) {
        x[i] = a[i] * b[i];
        y[i] = a[i] * c[i];
    }

How might this help improve performance?

12 © 2006 Elsevier Buffer management [Pan01]
- In embedded systems, buffers are often used to communicate between subsystems.
- Excessive dynamic memory management wastes cycles and energy with no functional improvement.
- Many embedded programs use arrays that are statically allocated.
- Several loop transformations have been developed to make buffer management more efficient.

Before:
    for (i = 0; i < N; ++i)
        for (j = 0; j < N-L; ++j)
            b[i][j] = 0;
    for (i = 0; i < N; ++i)
        for (j = 0; j < N-L; ++j)
            for (k = 0; k < L; ++k)
                b[i][j] += a[i][j+k];

After (the initialization and accumulation of b[i][j] are now closer together):
    for (i = 0; i < N; ++i)
        for (j = 0; j < N-L; ++j) {
            b[i][j] = 0;
            for (k = 0; k < L; ++k)
                b[i][j] += a[i][j+k];
        }

13 © 2006 Elsevier Buffer management [Pan01]
    int a_buf[L];
    int b_buf;
    for (i = 0; i < N; ++i) {
        /* initialize a_buf with the first L-1 elements of row i */
        for (j = 0; j < N - L; ++j) {
            b_buf = 0;
            a_buf[(j + L - 1) % L] = a[i][j + L - 1];
            for (k = 0; k < L; ++k)
                b_buf += a_buf[(j + k) % L];
            b[i][j] = b_buf;
        }
    }
- Loop analysis can help make data reuse more explicit.
- Buffers are declared in the program but don't need to exist in the final implementation.

14 © 2006 Elsevier Cache optimizations [Pan97]
Strategies:
- Move data to reduce the number of conflicts.
- Move data to take advantage of prefetching.
Need:
- Load map.
- Information on access frequencies.

15 © 2006 Elsevier Cache conflicts
- Assume a direct-mapped cache of size C = 2^m words with a cache line size of M words.
- Memory address A maps to cache line k = (A mod C)/M.
- If arrays a, b, and c of size N are laid out contiguously and N is a multiple of C, then a[i], b[i], and c[i] all map to the same cache line.

16 © 2006 Elsevier Reducing cache conflicts
- Could increase the cache size or associativity. Why might these be a bad idea?
- Instead, add L dummy words between adjacent arrays.
- Let f(x) denote the cache line to which the program variable x is mapped.
- For L = M, with a[i] starting at address 0, we have f(b[i]) = f(a[i]) + 1 and f(c[i]) = f(a[i]) + 2 (mod C/M), so the three arrays no longer collide on the same line.

17 © 2006 Elsevier Scalar variable placement
Place scalar variables to improve locality and reduce cache conflicts:
- Build a closeness graph that indicates the desirability of keeping sets of variables close in memory.
- Since M adjacent words are read on a single miss, group variables into M-word clusters.
- Build a cluster interference graph, which indicates which clusters map to the same cache line.
- Use the interference graph to optimize placement; try to avoid interference.

18 © 2006 Elsevier Constructing the closeness graph
Generate an access sequence:
- Create a node for each memory access in the code.
- A directed edge between nodes indicates successive accesses.
- Loop edges are weighted with the number of loop iterations.
Use the access sequence to construct the closeness graph:
- Connect nodes within distance M of each other (e.g., x to b is distance 4, counting both x and b).
- Link weights give the number of times control flows between the nodes.
- Requires O(Mn^2) time for n nodes.

19 © 2006 Elsevier Group variables into M-word clusters
Determine which variables to place on the same line:
- Put variables that will frequently be accessed close together on the same line. Why?
- Form clusters to maximize the total weight of edges within all the clusters.
- The greedy AssignClusters algorithm has complexity O(Mn^2).
- In the previous example, M = 3 and n = 9.

20 © 2006 Elsevier Build cluster interference graph
Identify clusters that should not map to the same line:
- Convert the variable access sequence to a cluster access sequence.
- Edge weights in the graph correspond to the number of times cluster accesses alternate along an execution path.
- Clusters joined by high-weight edges should not be mapped to the same line.

21 © 2006 Elsevier Assigning memory locations to clusters
Find an assignment of clusters in the cluster interference graph (CIG) to memory locations such that MemAssignCost(CIG) is minimized.

22 © 2006 Elsevier Array placement
- Focus on arrays accessed in innermost loops. Why?
- Arrays are placed to avoid conflicting accesses with other arrays.
- Don't worry about clustering, but still construct the interference graph; edges are dictated by array bounds.

23 © 2006 Elsevier Avoid conflicting memory locations
Given addresses X and Y, and a cache with k lines each holding M words, X and Y map to the same cache line when
    floor(X/M) mod k = floor(Y/M) mod k.

24 © 2006 Elsevier Array assignment algorithm [Pan97] © 1997 IEEE

25 Results from data placement [Pan97] © 2006 Elsevier
- Data cache hit rates improve by 48% on average.
- This yields an average speedup of 34%.
- Results are for a 256-byte cache on kernel benchmarks.

26 © 2006 Elsevier On-Chip vs. Off-Chip Data Placement
[Pan00] explores how to partition data between on-chip scratchpad memory and off-chip DRAM to optimize performance.

27 © 2006 Elsevier On-Chip vs. Off-Chip Data Placement
- Allocate static variables at compile time.
- Map all scalar values/constants to the scratchpad.
- Map all arrays too large for the scratchpad into DRAM.
- Only arrays with intersecting lifetimes will have conflicts.
Calculate several parameters:
- VAC(u): variable access count (how many times u is accessed).
- IAC(u): interference access count (how many times other variables are accessed during u's lifetime).
- IF(u): total interference count = VAC(u) + IAC(u).

28 © 2006 Elsevier On-Chip vs. Off-Chip Data Placement
Also calculate LCF(u), the loop conflict factor:
- p is the number of loops in which u is accessed.
- k(u) is the number of accesses to u in a loop.
- k(v) is the number of accesses to variables other than u in that loop.
And TCF(u), the total conflict factor, which combines LCF(u) with IF(u).

29 © 2006 Elsevier Scratch pad allocation formulation
AD(c): access density, the conflict factor of c per unit of its size; candidates with the highest access density are allocated to the scratchpad first.

30 © 2006 Elsevier Scratch pad allocation algorithm [Pan00] © 2000 ACM Press

31 © 2006 Elsevier Scratch pad allocation performance [Pan00] © 2000 ACM Press

32 © 2006 Elsevier Main memory-oriented optimizations
Memory chips provide several useful modes:
- Burst mode accesses sequential locations: provide a start address and a length, reducing the number of addresses sent and increasing the transfer rate.
- Paged modes allow only part of the address to be transmitted: the address is split into a page number and an offset, and the page number is stored in a register to quickly access values in the same page.
Access times therefore depend on the address(es) being accessed.

33 Banked memories © 2006 Elsevier Banked memories allow multiple memory banks to be accessed in parallel

