
Methodology for High-Speed Clock Tree Implementation in Large Chips


1 Methodology for High-Speed Clock Tree Implementation in Large Chips
Ravinder Rachala, Aaron Grenat, Prashanth Vallur, Christopher Ang. January 31, 2013

2 Advantages of Custom Clock Distribution
- Low skew
- Smaller AOCV timing uncertainty compared to full CTS
- Custom buffers are more tolerant of OCV, IR drop, and supply noise

(Plot: required voltage to hit target Fmax, low-skew vs. high-skew trees.) The plot displays a scenario where increased skew requires boosting the voltage to achieve the target Fmax. Effectively, skew translates into higher power (dynamic and leakage) for meeting a target frequency.
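The skew-to-power tradeoff on this slide can be sketched numerically. This is an illustrative toy model, not from the presentation: it assumes gate delay scales roughly as 1/VDD, so skew that eats into the cycle budget forces a higher supply voltage, and dynamic power then grows quadratically with that voltage. All numbers are assumptions.

```python
def required_vdd(fmax_ghz, skew_ps, v_nom=0.9):
    """Toy linear model (assumption): gate delay ~ 1/VDD, so when skew
    steals part of the cycle, the remaining logic must speed up by
    cycle / (cycle - skew), which needs proportionally more VDD."""
    cycle_ps = 1000.0 / fmax_ghz
    return v_nom * cycle_ps / (cycle_ps - skew_ps)

def dynamic_power(vdd, c_eff=1.0, f_ghz=3.0):
    """P_dyn = C * VDD^2 * f (arbitrary units)."""
    return c_eff * vdd ** 2 * f_ghz

v_low  = required_vdd(3.0, skew_ps=10)   # low-skew custom tree
v_high = required_vdd(3.0, skew_ps=40)   # high-skew full-CTS tree
```

With these assumed numbers, going from 10 ps to 40 ps of skew at 3 GHz raises the required VDD by roughly 10% and dynamic power by roughly 20%, which is the effect the plot illustrates.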

3 OLD FLOW - Clock Spine Topology in regular floorplan
(Figure: typical CPU floorplan with PLL, clock spine, and macros.)
Shown here is a typical CPU floorplan: a regular and very constrained problem. The clock trees do not cut into too many blocks, where blockages from clock buffers would cause congestion. The same macro can be programmed with varying final-buffer strengths because the aspect ratio is the same. A regular, repetitive structure like this floorplan is conducive to thin, long clock macro structures. Here we build two unique types of clock macros and stamp them, so the custom macro effort is relatively small compared to more complex floorplans.

4 OLD FLOW - Clock Spine Topology in complex floorplan
In more complex floorplans like the one above, we would end up needing too many custom clock spine macros, which are resource intensive and hard to converge in time for chip tapeout. The traditional clock spine macro style is not scalable for today's complex chips.

5 ISSUES with OLD methodology
- Very resource intensive; the increasing number of SOCs in the roadmap makes this even more challenging
- The area taken by the clock trees is badly utilized: less than 10%
- The increasing size of the macros (on the order of ~20 mm) runs the risk of not converging through the custom macro/IP build flow
- Floorplan challenges in accommodating the clock macros, and minimizing the number of unique macros, typically consume a lot of resource energy and time
- Re-use of clock macros across projects is heavily restricted by even small floorplan changes between projects

6 TMAC Flow : New Methodology
Clock macros are broken down into cells (called TMACs: Tiny MACros) that are instantiated flat at the IP level. Connections between the TMACs are made in the overlay (RDL - Route Distribution Layers).
(Figure: a ~1 mm clock macro decomposed into TMAC cells.)
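Because the TMACs are flat instances rather than one monolithic macro, producing them is mostly scripting. A minimal sketch, with hypothetical cell names, pitch, and coordinates (not from the presentation), that stamps TMAC placements along a vertical spine:

```python
def stamp_tmacs(spine_x_um, y_start_um, y_end_um, pitch_um=100.0,
                cell="TMAC_BUF_X8"):
    """Return (instance_name, cell, x, y) placements for flat TMAC
    instances stamped at a regular pitch along a vertical spine.
    All names and dimensions here are illustrative assumptions."""
    placements = []
    y, i = y_start_um, 0
    while y <= y_end_um:
        placements.append((f"tmac_{i}", cell, spine_x_um, y))
        y += pitch_um
        i += 1
    return placements

# A 1 mm spine at 100 um pitch yields 11 flat TMAC instances.
spine = stamp_tmacs(500.0, 0.0, 1000.0)
```

The same list can then be emitted as DEF COMPONENTS entries, which is what makes the overlay connection step purely a routing problem.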

7 TMAC Flow : New Methodology – sample Clock SPINE + MESH topology
(Figure: CTS root buffer or clock gater driving the mesh - MH (horizontal low-res layer) and MV (vertical low-res layer).)

8 PRIOR work: example
(Figure: Tile/RLM IP floorplan with PLL and clock macros.)
Unique clock macros: 1 conduit, 1 Vtree, 8 Htrees (all 8 flavors delay-matched) = 10 total
- Bad skew zones
- Driving large areas of the design from a corner (i.e., huge capacitance on the buffer, large current through the wire) causes EM and self-heating issues
- Long distribution wires are susceptible to ringing/reflections due to parasitic inductance

9 TMAC Flow: example
(Figure: Tile/RLM IP floorplan with PLL and flat TMAC cells connected in the overlay.)
- One clock spine
- All TMAC cells connected in the overlay
- More clock coverage

10 TMAC Flow : New Methodology BENEFITS
- The entire distribution is contained in one clock spine
- Reduces the number of circuit and layout resources
- Frees up area between the TMACs for RLMs/tiles
- The TMAC cell library is built once per technology node (e.g., GF 28nm) and reused across all projects in that process technology
- Floorplan changes can be accommodated easily, even in late stages of the design cycle
- Provides more complete and robust clock coverage; bad-skew zones are avoided and reliability concerns are minimized
- Instance swapping (sizing clock mesh drivers for power and performance optimization) can be done easily based on the clock mesh load
- Creates a full-custom-quality clock spine network with significantly less effort
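The instance-swapping benefit above can be illustrated with a small sketch. The cell names and load limits below are assumptions, not from the slides: the idea is simply to pick the smallest library driver whose rated load covers the local mesh load.

```python
# Hypothetical drive library: cell name -> assumed max load in fF.
DRIVE_LIB = {
    "TMAC_BUF_X4":  200.0,
    "TMAC_BUF_X8":  400.0,
    "TMAC_BUF_X16": 800.0,
}

def pick_driver(load_ff):
    """Return the smallest driver whose rated max load covers the local
    clock-mesh load; light regions get small drivers (saving power),
    heavy regions get large ones (preserving slew/performance)."""
    for cell, max_load in sorted(DRIVE_LIB.items(), key=lambda kv: kv[1]):
        if load_ff <= max_load:
            return cell
    raise ValueError(f"load {load_ff} fF exceeds largest driver")
```

Because every TMAC is a flat instance of a library cell, this swap is a one-line ECO per instance rather than a macro rebuild.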

11 Clock grid optimization techniques
- Reduced clock metal capacitance by ~45%
- Classic clock mesh pruning methods such as on-demand grid
- Pushing the VIA stack into the MPCTS (Multi-Point CTS) buffer
- Providing clock arrival times at each MPCTS entry point on the mesh (SDF file) to the full-chip timing flow

New MPCTS buffer cell: the connection from the M2 CLK pin to the MH layer is built into the cell, so the pin is elevated to the MH layer; the new cell is the same size as the standard cell. With the standard MPCTS buffer cell, the auto-router built the connection from the 'CLK' pin (M2) up to the 'MH' clock grid route; with the new cell, all of that route capacitance is saved and the skew from the circuitous route is avoided.
(Figure: standard vs. new MPCTS buffer cell connecting to the clock mesh on the MH layer.)
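The on-demand-grid idea above - pruning mesh segments that serve no nearby sink - can be sketched as follows. The data model and capture distance are assumptions for illustration; the real flow operates on routed mesh geometry.

```python
def prune_mesh(segments, sinks, capture_um=50.0):
    """On-demand-grid sketch: keep a horizontal mesh segment (x0, x1, y)
    only if some sink (sx, sy) lies within capture_um of it; segments
    with no nearby sink are pruned to recover route capacitance."""
    kept = []
    for (x0, x1, y) in segments:
        for (sx, sy) in sinks:
            if x0 - capture_um <= sx <= x1 + capture_um and abs(sy - y) <= capture_um:
                kept.append((x0, x1, y))
                break
    return kept

segments = [(0.0, 100.0, 0.0), (0.0, 100.0, 500.0)]  # two mesh straps
sinks = [(50.0, 10.0)]                               # one CTS entry point
pruned = prune_mesh(segments, sinks)                 # only the first strap survives
```

Dropping unused straps is one of the levers behind the ~45% clock metal capacitance reduction quoted above, though the slide does not break down the contribution of each technique.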

12 TMAC Flow: clock spine build steps
1. Import the IP/SOC floorplan (DEF or GDS) into Cadence Virtuoso Layout XL
2. Draw the full clock spine in Cadence Virtuoso XL (schematic, layout)
3. Export the entire clock spine layout to a DEF file (using an internal flow)
4. Merge the clock spine DEF with other overlay DEFs (top-layer power grid + clock mesh, etc.) in First Encounter
5. Push down the clock design (distribution + mesh/grid) into the floorplan views so RLMs/tiles can see it for CTS buffer placement, etc.
6. Extract the clock routes (StarRCXT) at the IP/SOC top level and run timing with PrimeTime

13 Custom design data to DEF conversion FLOW CHART
(Flow chart: GDSII + CDL -> LVS -> annotated GDSII file + cross-reference files -> internal database -> data-processing tools (with component cell list) -> DEF writer -> DEF)

14 TMAC Flow: clock mesh build steps
1. Draw the clock mesh/grid routes in FE (per the spec from the clock circuit team: route width, space, shielding)
2. A top-level script prunes the MH route completely and inserts back the shortest possible segment connecting each CTS entry buffer to the nearest MV layer
3. Push down the mesh into the tiles; the CTS buffer placement flow is run; tiles close placement, routing, and timing
4. All tile DEFs are exported for the full clock mesh extraction and SPICE simulation flow (CES)
5. Run the CES flow; skew (clock arrival times, as an SDF file) is reported to the full-chip timing flow, and the clock routes are analyzed against EM pass/fail criteria
6. Extract the clock distribution routes at the IP/SOC level and run full-chip STA timing (PrimeTime)
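The MH-pruning step described above reduces, per CTS entry buffer, to a nearest-strap search. A hypothetical sketch (coordinates in um; the real script operates on DEF routes, not bare numbers):

```python
def mh_stub(buf_x, mv_xs):
    """Return (x0, x1) for the shortest horizontal MH stub connecting a
    CTS entry buffer at x = buf_x to the nearest vertical MV strap.
    The full MH route has already been pruned; only this stub remains."""
    nearest = min(mv_xs, key=lambda x: abs(x - buf_x))
    return (min(buf_x, nearest), max(buf_x, nearest))

# Buffer at x=120 um, MV straps at x=0/100/200 um -> a 20 um stub to x=100.
stub = mh_stub(120.0, [0.0, 100.0, 200.0])
```

Keeping only the shortest stub is what lets the mesh deliver coverage without paying for redundant MH wire under every buffer.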

15 Benefits proven in recent AMD SOCs
- Less resource need
  - 32nm SOI APU graphics IP: 7 clocks, ~30 clock macros, 4 circuit and 4 layout resources
  - 28nm APU graphics IP: 9 clocks, 1 clock spine DEF, 1.5 circuit and 1 layout resources
- Area savings
  - 32nm SOI APU graphics IP area: 98 mm²; clock macro area: 1.21 mm² → 1.23%
  - 28nm APU graphics IP area: 131 mm²; clock macro area: 0.18 mm² → 0.12%
- Floorplan flexibility: with the new methodology (TMAC flow), high-speed clock distribution can be designed to fit any floorplan. For example, we were able to deliver the clock distribution design for a server SOC in one quarter of the time it takes with the old clock spine macro flow.
- Reuse across projects: the TMAC library (clock buffer cells, etc.) developed for a process technology is being leveraged across multiple APU projects.

16 Q & A Thank You

17 Trademark Attribution AMD, the AMD Arrow logo and combinations thereof are trademarks of Advanced Micro Devices, Inc. in the United States and/or other jurisdictions. Other names used in this presentation are for identification purposes only and may be trademarks of their respective owners. ©2012 Advanced Micro Devices, Inc. All rights reserved.
