1 Microsoft Proprietary
High Productivity Computing
Large-scale Knowledge Discovery: Co-evolving Algorithms and Mechanisms
Steve Reinhardt, Principal Architect, Microsoft
Prof. John Gilbert, UCSB
Dr. Viral Shah, UCSB

2 Context for Knowledge Discovery
From Debbie Gracio and Ian Gorton, PNNL Data Intensive Computing Initiative

3 Knowledge Discovery (KD) Definitions
– Data-intensive computing: when the acquisition and movement of input data is a primary limitation on feasibility or performance
– Simple data mining: searching for exceptional values of elemental measures (e.g., heat, number of transactions)
– Knowledge discovery: searching for exceptional values of associative/social measures (e.g., most between, belonging to the greatest number of valuable reactions)
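The distinction between elemental and associative measures can be made concrete with a small sketch (the transaction graph, account names, and amounts below are invented for illustration). The "data mining" query is a simple max over per-vertex values; the "knowledge discovery" query — which account is most between — needs the whole graph, computed here with Brandes' betweenness-centrality algorithm:

```python
from collections import deque, defaultdict

# Toy transaction network: accounts as vertices, transfers as edges.
edges = [("a", "b"), ("b", "c"), ("c", "d"), ("b", "e"), ("e", "d")]
amounts = {"a": 10, "b": 50, "c": 30, "d": 900, "e": 40}

# Simple data mining: exceptional value of an elemental measure.
hottest = max(amounts, key=amounts.get)  # -> "d", the largest transaction

# Knowledge discovery: exceptional value of an associative measure,
# here unweighted betweenness centrality (Brandes' algorithm).
adj = defaultdict(list)
for u, v in edges:
    adj[u].append(v)
    adj[v].append(u)

def betweenness(adj):
    bc = dict.fromkeys(adj, 0.0)
    for s in adj:
        # BFS from s, recording shortest-path counts and predecessors.
        sigma = dict.fromkeys(adj, 0)
        sigma[s] = 1
        dist = {s: 0}
        preds = defaultdict(list)
        order, q = [], deque([s])
        while q:
            v = q.popleft()
            order.append(v)
            for w in adj[v]:
                if w not in dist:
                    dist[w] = dist[v] + 1
                    q.append(w)
                if dist[w] == dist[v] + 1:
                    sigma[w] += sigma[v]
                    preds[w].append(v)
        # Accumulate pair dependencies in reverse BFS order.
        delta = dict.fromkeys(adj, 0.0)
        for w in reversed(order):
            for v in preds[w]:
                delta[v] += sigma[v] / sigma[w] * (1 + delta[w])
            if w != s:
                bc[w] += delta[w]
    return bc

bc = betweenness(adj)
most_between = max(bc, key=bc.get)  # -> "b": on the most shortest paths
```

Note that the two queries pick different accounts: "d" carries the biggest single transaction, but "b" is the broker everything flows through — the kind of fact only an associative measure reveals.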

4 Today's Biggest Obstacle in the KD Field
The lack of fast feedback between domain experts and infrastructure/tool developers about good, usable, scalable KD software platforms. We need to accelerate the rate of learning about both good KD algorithms and good KD infrastructure.
Domain experts want:
– good infrastructure that works, scales greatly, and runs fast
– flexibility to develop and tweak algorithms to suit their needs
– algorithms with a strong mathematical basis
…but don't know the best approach or algorithms.
Infrastructure developers want:
– a clear audience for what they develop
– an architecture that copes with client, cluster, cloud, GPU, and huge data
…but don't know the best approach.
We need to get good (not perfect) scalable platforms into use to co-evolve towards the best approaches and algorithms.

5 Candidate Approaches

Ad hoc
– Description: build each algorithm from the ground up
– Example: Metis
– Pros: fast on a single node, since highly tailored
– Cons: unclear math basis; development is time-consuming, since there are no common kernels; scaling is hard; poor use of the local memory hierarchy

"Visitor"
– Description: tailor actions at key points in graph traversal
– Example: Boost Graph Library, Pregel
– Pros: fast, since tailored; extensible to out-of-memory formats (Pregel)
– Cons: unclear math basis; each algorithm may need to cope with high cluster latency; poor use of the local memory hierarchy
– Notes: not at domain-expert level

Sparse-matrix-based
– Description: cast graphs as sparse matrices and use sparse linear-algebraic operations
– Example: KDT
– Pros: proven math basis; built-in tolerance for high cluster latency; good use of the local memory hierarchy; extensible to out-of-memory formats
– Cons: mind-bender without a good graph API on top
– Notes: graph layer at domain-expert level
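The sparse-matrix-based approach can be illustrated with a minimal sketch (toy directed graph; plain Python dicts and sets stand in for a real sparse-matrix library). A breadth-first-search level expansion is exactly a sparse matrix-vector product over the boolean (OR, AND) semiring, which is why graph traversal can be cast as linear algebra:

```python
# Toy adjacency "matrix" stored as a dict of row index -> set of
# column indices with nonzeros (i.e., i -> set of out-neighbors).
A = {
    0: {1, 2},   # edges 0->1, 0->2
    1: {3},
    2: {3},
    3: {4},
    4: set(),
}

def bfs_levels(A, source):
    """BFS via repeated semiring SpMV.

    Each iteration computes y = A^T x over the boolean semiring:
    y[j] = OR over i of (x[i] AND A[i][j]), masked by unvisited
    vertices -- i.e., the next BFS frontier.
    """
    level = {source: 0}
    frontier = {source}
    depth = 0
    while frontier:
        depth += 1
        nxt = set()
        for i in frontier:          # nonzeros of the frontier vector x
            for j in A[i]:          # nonzeros of row i of A
                if j not in level:  # mask: keep only unvisited vertices
                    nxt.add(j)
                    level[j] = depth
        frontier = nxt
    return level

levels = bfs_levels(A, 0)  # {0: 0, 1: 1, 2: 1, 3: 2, 4: 3}
```

The payoff named in the slide's pros column follows directly: once traversal is a matrix product, the decades of work on distributing, blocking, and latency-hiding sparse linear algebra applies to graphs for free.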

6 KDT Layers: Enable Overloading with Various Technologies
[Layer diagram, from top to bottom:]
– Algorithms (kdt): Betweenness Centrality, Community Detection, Elementary Mode Analysis, Barycentric Clustering, All Pairs Shortest Path, …
– Array interface: scipy
– Local operations: constructors, I/O, SpGEMM, SpRef/SpAsgn, SpMV, SpAdd, SpGEMM on semirings; overloadable variants such as SpGEMM on a GPU or All Pairs Shortest Path on the Cray XMT
– Parallel/distributed operations: constructors, SpGEMM, SpMV, SpAdd, SpGEMM on semirings, I/O; in-memory (Star-P) or out-of-memory (DryadLINQ-based)
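One way to see why "SpGEMM on semirings" earns its own layer, and how All Pairs Shortest Path sits on top of it: the same matrix-multiply kernel computes shortest paths when run over the (min, +) semiring instead of the usual (+, ×). A dense toy sketch with an invented 3-vertex graph — not KDT's actual API:

```python
INF = float("inf")

def minplus_mm(A, B):
    """Matrix 'multiply' over the (min, +) semiring:
    C[i][j] = min over k of (A[i][k] + B[k][j]).
    Structurally identical to ordinary GEMM; only the
    scalar add/multiply operations are overloaded."""
    n = len(A)
    return [[min(A[i][k] + B[k][j] for k in range(n))
             for j in range(n)]
            for i in range(n)]

# Toy weighted digraph as a distance matrix (0 on the diagonal,
# INF where there is no edge).
D = [[0,   3,   INF],
     [INF, 0,   1],
     [2,   INF, 0]]

# Repeated squaring: k squarings cover paths of up to 2^k edges,
# so ceil(log2(n-1)) squarings yield all-pairs shortest paths.
for _ in range(2):
    D = minplus_mm(D, D)
# D[0][2] == 4  (path 0 -> 1 -> 2, cost 3 + 1)
```

This is the overloading the slide title refers to: the layers above (APSP, clustering) call one semiring-GEMM interface, while the layers below are free to supply a local, GPU, Star-P, or out-of-memory implementation of it.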

7 DryadLINQ: Query + Plan + Parallel Execution
Dryad
– distributed-memory, coarse-grain run-time
– generalized MapReduce
– uses computational vertices and communication channels to form a dataflow execution graph
LINQ (Language INtegrated Query)
– a query-style language interface to Dryad
– typical relational operators (e.g., Select, Join, GroupBy)
Scaling for a histogram example
– 10.2 TB of input data on 1,800 cluster nodes; 43,171 execution-graph vertices spawning 11,072 processes; 33 GB of output data produced in 11.5 minutes of execution
[Diagram: job manager and cluster, with a control plane scheduling vertices and a data plane moving data over files, TCP, FIFO, and the network]
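DryadLINQ queries are written in .NET languages such as C#; as a language-neutral illustration of the relational operators behind the histogram example, here is a rough Python analogue (toy input, single process — in DryadLINQ each operator would be compiled into distributed dataflow vertices):

```python
from collections import Counter

# Hypothetical input partition; a real run would read TB-scale
# partitioned files across the cluster.
lines = ["to be or not to be", "that is the question"]

# SelectMany: each line -> its words.
words = (w for line in lines for w in line.split())

# GroupBy + per-group count: words -> (word, frequency).
histogram = Counter(words)

top = histogram.most_common(2)  # [('to', 2), ('be', 2)]
```

The point of the LINQ interface is that the programmer writes only this query shape; the planner decides how to partition, schedule, and move the data between the 43,171 vertices of a run like the one quoted above.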

8 Star-P Bridges Scientists to HPCs
Star-P enables domain experts to use parallel, big-memory systems via productivity languages (e.g., the M language of MATLAB).
Knowledge-discovery scaling with Star-P:
– kernels to 55B edges between 5B vertices, on 128 cores (consuming 4 TB of memory)
– compact applications to 1B edges on 256 cores

9 Next Steps
Get prototypes available for early experience and feedback
– in-memory and out-of-memory targets of KDT
– with a graph layer
– likely exposed via a Python library interface

10 © 2010 Microsoft Corporation. All rights reserved. Microsoft, Windows, Windows Vista, Windows 7, and other product names are or may be registered trademarks and/or trademarks in the U.S. and/or other countries. The information herein is for informational purposes only and represents the current view of Microsoft Corporation as of the date of this presentation. Because Microsoft must respond to changing market conditions, it should not be interpreted to be a commitment on the part of Microsoft, and Microsoft cannot guarantee the accuracy of any information provided after the date of this presentation. MICROSOFT MAKES NO WARRANTIES, EXPRESS, IMPLIED OR STATUTORY, AS TO THE INFORMATION IN THIS PRESENTATION.

