La Investigación generadora de riqueza (Research as a Generator of Wealth). WINCO'05, México D.F., México, 13 April 2005. Prof. Mateo Valero, UPC, Barcelona.

1 La Investigación generadora de riqueza (Research as a Generator of Wealth). WINCO'05, México D.F., México, 13 April 2005. Prof. Mateo Valero, UPC, Barcelona

2 Outline
- High Performance Research Group at UPC
- Centers of Supercomputing: CEPBA, CIRI, BSC
- Future context in Europe: Networks of Excellence, Large Scale Facilities

3 Erkki Liikanen, Brussels, 21 April 2004


6 Basic concepts about research
- Fundamental/basic research versus applied research
- Good versus bad research
- Good research always produces wealth
- Short/medium/long-term research
- Products of good research: papers and patents; educated people
- Good education is a key component in this picture
- Cross-pollination between research groups and companies is the other part of the picture
- Promoting good research is the only way for Europe to be competitive in the short and long term

7 History
- Thesis: very difficult; FIB
- Creating the department: courses, hiring, ...
- Starting to do research
- The Spanish situation... no money, nothing exists, ... CICYT
- Strategic decisions: computer architecture; supercomputers

8 Computer Architecture
- Computer architecture is a rapidly changing subject: technology changes very fast and new applications emerge continuously
- Computer architects must deal with both technology and applications
- CMOS technology is coming to an end
- A new group of applications is appearing
- There is a great opportunity for high-performance architectures for these applications

9 Supercomputers
- The fastest computers in the world
- Used for simulation
- Mainly built by US companies
- No experience in Spain
- Europe uses and produces software

10 Spain's entry into the EU
- Internationalization of research
- New opportunities
- Before and after 1986
- Industrial projects
- We used these projects to grow in basic research

11 CICYT: Spanish projects
- Parallelism Exploitation in High Speed Architectures (TIC)
- Architecture, Tools and Operating Systems for Multiprocessors (TIC)
- HLL-oriented Architectures (TIC)
- High Performance Computing II (TIC C02-01; UdZ, URV and ULPGC)
- High Performance Computing III (TIC C02-01; UVall)
- Architectures and Compilers for Supercomputers (TIC)
- Parallel Architectures for Symbolic Computation (TIC)
- High Performance Computing (TIC)
- Microkernel/applications Cooperation in Multiprocessor Systems (TIC)
- High Performance Computing IV (TIC C02-01)

12 People: a large research group
- 79 researchers
- 34 with a PhD
- 45 currently working on a PhD

13 Research topics and target platforms

Task                                            | Subtask                                  | Target architecture
T1: Computer architecture                       | T1.1: Processor microarchitecture        | SP
                                                | T1.2: Memory hierarchy                   | SP and small MP
                                                | T1.3: Code generation and optimization   | SP
T2: Compilers, execution environments and tools | T2.1: Sequential code optimization       | SP
                                                | T2.2: OpenMP                             | All MP
                                                | T2.3: Extensible execution environments  | All MP and GRID
                                                | T2.4: Tools                              | All MP and GRID
T3: Algorithms and applications                 | T3.1: Numerical applications             | All of them
                                                | T3.2: Non-numerical applications         | SP and small MP
                                                | T3.3: Distributed applications           | All MP and GRID

14 Computer architecture (uniprocessor)
- Dynamically scheduled (superscalar)
  - Front-end engines: instruction fetch mechanisms and branch predictors
  - Speculative execution: data and address prediction
  - Organization of resources: register file, functional units, cache organization, prefetching, ...
  - Kilo-instruction processors
  - Not only performance: area and power consumption

15 Computer architecture (uniprocessor)
- Statically scheduled (VLIW)
  - Organization of resources (functional units and registers)
  - Not only performance: area and power consumption
- Advanced vector and multimedia architectures
  - Vector units for superscalar and VLIW architectures
  - Memory organization
  - Data and instruction level parallelism

16 Computer architecture (multiprocessor)
- Multithreaded (hyperthreaded) architectures
  - Front-end engines and trace caches
  - Speculation at different levels
  - Dynamic management of thread priorities
- Shared-memory multiprocessors
  - Memory support for speculative parallelization (hardware and software)

17 System software (multiprocessor)
- OpenMP compilation
  - Proposed language extensions to the standard
  - Compiler technology for OpenMP
- OpenMP runtime systems
  - Parallel library for OpenMP (SGI Origin, IBM SP2, Intel hyperthreaded, ...)
  - Software DSM (Distributed Shared Memory)
  - Intelligent runtimes: load balancing, data movement, ...

18 System software (multiprocessor)
- Scalability of MPI
  - IBM BG/L with 64K processors
  - Prediction of messages
- Programming models for the GRID
  - GRID superscalar

19 Algorithms and applications
- Solvers for systems of linear equations
- Out-of-core kernels and applications
- ST-ORM: a metacomputing tool for performing stochastic simulations
- Databases
  - Sorting, communication in join operations
  - Query optimization
  - Memory management in OLTP

20 PhD programs
- Research topics are part of our PhD programs
- Computer Architecture and Technology at UPC
  - Quality award: MCD
  - 42% of the total number of credits
  - 30 PhDs in the last 5 years (66% of the whole program)

21 International collaboration
- Industry: Intel, IBM Watson, Toronto, Haifa and Germany Labs (*), Hewlett-Packard, STMicroelectronics, SGI, Cray, Compaq
- University and research laboratories: Univ. of Illinois at Urbana-Champaign, Wisconsin-Madison, California at Irvine, William and Mary, Delft, KTH, ... (more than 60)
- Major research laboratories in the USA: NASA Ames, San Diego (SDSC), Lawrence Livermore (LLNL), ...
- Standardization committees: OpenMP Futures in the Architecture Review Board (ARB)
... with joint publications and developments
(*) Part of the CEPBA-IBM Research Institute research agreement

22 International collaboration
- Pre- and postdoctoral short/medium/long stays
  - Industry: Intel, SUN, IBM, Hewlett-Packard, ...
  - University: Univ. of Illinois at Urbana-Champaign, Univ. of California at Irvine, Univ. of Michigan, ...
- Visiting professors and researchers
  - More than 70 talks by external researchers in our weekly seminar over the last two years
- PhD courses: 5 courses (135 hours); 4 courses (110 hours); 6 courses (165 hours)

23 Some of the seminar guests
Krste Asanovic (MIT), Venkata Krishnan (Compaq-DEC), Trevor Mudge (U. Michigan), Jim E. Smith (U. Wisconsin), Luiz A. Barroso (WRL), Josh Fisher (HP Labs), Michel Dubois (USC), Ronny Ronen (Intel, Haifa), Josep Torrellas (UIUC), Per Stenstrom (U. Gothenburg), Wen-mei Hwu (UIUC), Jim Dehnert (Transmeta), Fred Pollack (Intel), Sanjay Patel (UIUC), Daniel Tabak (George Mason U.), Walid Najjar (Riverside), Paolo Faraboschi (HP Labs), Eduardo Sánchez (EPFL), Guri Sohi (U. Wisconsin), Jean-Loup Baer (U. Washington), Miron Livny (U. Wisconsin), Thomas Sterling (NASA JPL), Maurice V. Wilkes (AT&T Labs), Theo Ungerer (Karlsruhe), Mario Nemirovsky (XStream Logic), Gordon Bell (Microsoft), Timothy Pinkston (USC), Roberto Moreno (ULPGC), Kazuki Joe (Nara Women's U.), Alex Veidenbaum (Irvine), G.R. Gao (U. Delaware), Ricardo Baeza (U. Chile, Santiago), Gabby M. Silberman (CAS-IBM), Sally A. McKee (U. Utah), Evelyn Duesterwald (HP Labs), Yale Patt (Austin), Burton Smith (Tera), Doug Carmean (Intel, Oregon), David Baker (BOPS)

24 International collaboration
- Mobility programs (*)
  - Access: Transnational Access for Researchers
  - Access-2: Transnational Access for Researchers
- Networks of Excellence
  - HiPEAC: High-Performance Embedded Architectures and Compilers, in evaluation
  - CoreGRID (*): middleware for the GRID, in evaluation
(*) Projects of the European Center for Parallelism of Barcelona

25 Industrial technology transfer
- European projects (IST and FET)
  - INTONE: Innovative OpenMP Tools for Non-experts
  - DAMIEN: Distributed Applications and Middleware for Industrial use of European Networks
  - POP: Performance Portability of OpenMP
  - Antitesys: A Networked Training Initiative for Embedded Systems Design
- Attracting international companies to establish branches or laboratories in Barcelona
  - EASi Engineering: S. Girona (*)
  - Intel Labs: R. Espasa (*), T. Juan (*), A. Gonzalez (*)
  - Hewlett-Packard Labs
(*) Professors of the Computer Architecture Department (full or part time dedication)

26 Industrial relationships
- Compaq
  - Sabbaticals: Roger Espasa (VSSAD), Toni Juan (VSSAD), Marta Jimenez (VSSAD)
  - Interns: Jesus Corbal (VSSAD), Alex Ramirez (WRL)
  - Partnerships: BSSAD
- HP
  - Sabbaticals: Josep Llosa (Cambridge)
  - Interns: Daniel Ortega, Javier Zalamea
  - Partnerships: software prefetching, two-level register file
- Sun Microsystems
  - Interns: Pedro Marcuello, Ramon Canal, Esther Salami, Manel Fernande
- Microsoft
- IBM
  - Interns: Xavi Serrano (CAS), Daniel Jimenez (CAS), 3 more people in 2001
  - Partnerships: supercomputing (CIRI), low power, databases, binary translation
  - Faculty Awards
- Intel
  - Interns: Adrian Cristal (Haifa), Alex Ramirez (MRL), Pedro Marcuello (MRL)
  - Partnerships: semantic gap, smart registers, memory architecture for multithreaded processors, speculative vector processors
  - Labs in Barcelona: MRL and BSSAD
  - Advisory Board of MRL
- Xstream, Flowstorm, Kambaya: advising committee
- STMicroelectronics
- Analog Devices

27 Conclusions
- Large (90-person) team with experience in many topics: computer architecture, system software, algorithms and applications
- Good production
  - >100 PhD theses
  - Publications in top conferences (>400) and journals (>150)
  - Prototypes (3) used in research laboratories
  - 25 professionals in industry
- Long track record of international collaborations, academic and industrial

28 Outline
- High Performance Research Group at UPC
- Centers of Supercomputing: CEPBA, CIRI, BSC
- Future context in Europe: Networks of Excellence, Large Scale Facilities
- Conclusions

29 CEPBA: Centro Europeo de Paralelismo de Barcelona (European Center for Parallelism of Barcelona)

30 CEPBA (created October 1991), building on the DAC's HPC experience and the computing needs of other departments (RME, LSI, FEN, FA; CER): R+D on parallelism, training, and technology transfer in a European context

31 CEPBA activities
- Service
- Training
- Technology transfer
  - T.T. management
  - R&D: 24 projects, as technological expert and through developments

32 Service
- SGI: 64 R10000, CC-NUMA, 8 GB memory, 360 GB disk
- COMPAQ: 12 Alpha SMP, 2 GB memory, 32 GB disk
- Parsytec: 16 Pentium II (2-CPU nodes), 3 networks (Fast Ethernet, Myrinet, HS link), 1 GB memory, 30 GB disk
- IBM: 64 Power3, networked SMP, 32 GB memory, 400 GB disk
- Access and user support

33 European mobility programs
- Joint CEPBA-CESCA projects
- Stays and access to resources

34 Technology transfer
- R+D projects: 23 projects, 3 cluster projects, 28 subprojects; technological partner and developments
- Technology transfer management: technical management and dissemination

35 R&D projects (metacomputing, system tools, parallelization): Supernode II, Dimemas & Paraver, Identify, Permpar, Parmat, Asra, Promenvir, Promenvir+, DDT, Hipsid, Nanos, Apparc, Phase, Bonanova, Sloegat, BMW, Sep-tools, ST-ORM, Intone, Damien

36 T.T. management
- 35 proposals, 28 projects: CEPBA-TTN, PCI-PACOS, PCI-II, PACOS 2
- Promote proposals to the EC
- Technical management of projects
- Dissemination

37 PCI-PACOS partners: AMES, CIMNE, Metodos Cuantitativos, Gonfiesa, CESCA, CESGA, Hesperia, Neosystems, UPC-EIO, Iberdrola, Uitesa, UPV, AZTI, UPC-LIM, Ayto. Barcelona, Uitesa, UPC-EIO, TGI, UPM-DATSI, Tecnatom, UMA

38 PCI-II partners: Italeco, Geospace, Intecs, Univ. Leiden, Ospedali Galliera, Le Molinette, Parsytec, PAC, EDS, ENEL, EDF, CSR4, Reiter, Kemijoki, Ferrari, Genias, P3C, Cari Verona, AIS, Univ. Cat. Milan, Volkswagen, Ricardo, Inisel Espacio, Infocarto, UPC-TSC, CEPBA-UPC, CANDEMAT, CIMNE, Intera SP, Intera UK, UPC-DIT, Cristaleria Española, UNICAN

39 HPCN TTN network


41 References: technology promotion
AMES, AUSA, INDO, Ayto. BCN, BCN COSIVER, CEBAL-ENTEC, DERBI, HESPERIA, Métodos Cuantitativos, Mides, NEOSYSTEMS, QuantiSci SL, Software Greenhouse, Soler y Palau, ST Mecánica Torres, AIS, CariVerona, EDF-LNH, EDS Italy, ENEL-CRIS, Ferrari Auto, Genias, Geospace, Intecs Sistemi, Intespace, Italeco, Kemijoki, Le Molinette, Ospedali Galliera, Parsytec, QuantiSci LTD, Reiter, Ricardo, Volkswagen, CESCA, CESGA, CIMNE, CRS4, P3C, PAC, RUS, Catholic Univ. Milan, Politecnico di Milano, UNICAN, UPM, UMA, Univ. of Leiden, UPC-DIT, UPC-EIO, UPC-LIM, UPC-OE, UPC-RME, UPC-TSC, UPM-DATSI, UPV, AZTI, Candemat, CASA, CIC, Cristaleria Española, Envison, Gonfiesa, GTD, Iberdrola, Indra Espacio, Infocarto, SAGE, SENER, Tecnatom, TGI, Uitesa, European Commission

42 Budget (PACOS, PCI-II, TTN): managed ESPRIT funding of 11.8 Mecus (total)

43 CEPBA-IBM Research Institute

44 CIRI's mission
The CEPBA-IBM Research Institute (CIRI) was a research and development partnership between UPC and IBM, established in October 2000 with an initial commitment of four years. Its mission was to contribute to the community through R&D in Information Technology. Its objectives were: research & development, external R&D support, technology transfer, and education.
CEPBA (European Center for Parallelism of Barcelona) was a deep-computing research center at the Technical University of Catalonia (UPC), created in 1991.

45 Organization
- Management and Technical Boards evaluate project performance and achievements and recommend future directions
- 70 people were collaborating with the Institute:
  - Board of Directors (4)
  - Institute's Professors (10)
  - Associate Professors (9)
  - Researchers (5)
  - PhD Students (21)
  - Graduate Students (3)
  - Undergraduate Students (18)

46 Introduction: CIRI areas of interest
- Deep computing
  - Performance tools: numerical codes, web application servers
  - Parallel programming
  - Grid
  - Code optimization
- Computer architecture
  - Vector processors
  - Network processors
- Databases
  - Performance optimization
  - DB2 development & testing

47 CIRI R&D philosophy
- Technology: OpenMP (OMPtrace, activity hardware counters, nested parallelism, precedences, indirect access); performance visualization (Paraver, Dimemas; collective, mapping); runtime (scheduling, system scheduling, self analysis, performance-driven processor allocation, process and memory control, dynamic load balancing, page migration); MPI (UTE2paraver, OMPItrace); metacomputing (ST-ORM); web; computer architecture
- Applications: steel stamping, structural analysis, MGPOM, MPIRE

48 Resources / CPU user time evolution
- 164 processors, 82 GB RAM, 1.8 TB disk
- 8 Power3 nodes: 16-way SMP, 8 GB RAM
- 9 Power4 (IBM p630) nodes: 4-way SMP, 2 GB RAM
- 336 Gflop/s parallel system

49 Barcelona Supercomputing Center Centro Nacional de Supercomputación Professor Mateo Valero Director

50 Theory, experiment, and computing & simulation. High Performance Computing: aircraft and automobile design; fusion reactor and accelerator design; material science; astrophysics; climate and weather modeling

51 What drives HPC? "The Need for Speed..."
Computational needs of technical, scientific, digital media and business applications approach or exceed the Petaflop/s range.
- CFD wing simulation: 512x64x256 grid (8.3x10^6 mesh points), 5000 FLOPs per mesh point, 5000 time steps/cycles: 2.15x10^14 FLOPs (source: A. Jameson et al.)
- CFD full plane simulation: 512x64x256 grid (3.5x10^17 mesh points), 5000 FLOPs per mesh point, 5000 time steps/cycles: 8.7x10^24 FLOPs (source: A. Jameson et al.)
- Materials science: magnetic materials, current: 2000 atoms, 2.64 TF/s, 512 GB; future: HDD simulation, 30 TF/s, 2 TB. Electronic structures, current: 300 atoms, 0.5 TF/s, 100 GB; future: 3000 atoms, 50 TF/s, 2 TB (source: D. Bailey, NERSC)
- Digital movies and special effects: ~1e14 FLOPs per frame, 50 frames/sec, 90-minute movie: 2.7e19 FLOPs, ~150 days on GFLOP/s CPUs (source: Pixar)
- Spare parts inventory planning: modeling the optimized deployment of part numbers across 100 part depots requires 2x10^14 FLOP/s (12 hours on 10 650 MHz CPUs); PetaFlop/s sustained performance gives a 1-hour turnaround. The industry trend to rapid, frequent modeling for timely business decision support drives higher sustained performance (source: B. Dietrich, IBM)

52 Applications for supercomputers
- Aircraft/car simulations
- Life science (proteins, human organs, ...)
- Atmosphere
- Stars
- Nanomaterials
- Drugs
- Regional/global climate/weather/pollution
- High energy physics
- Combustion
- Image processing

53 Suitable applications for massively parallel systems Source: Rick Stevens, Argonne National Lab and The University of Chicago

54 Motivation
- Significant contribution to the advancement of science in Spain, enabling supercomputing capacity, scientific-technical synergies, and cost rationalization thanks to economies of scale
- A powerful tool to assist public and private research and development centers, generating impulses for a new technological environment

55 Mission: "Investigate, develop and manage technology to facilitate the advancement of science"

56 Objectives
- Research in supercomputing and computer architecture
- Collaborate in R&D e-Science projects with prestigious scientific teams
- Manage the BSC supercomputers to accelerate relevant contributions to research areas where intensive computing is an enabling technology

57 BSC organization
- Board of Trustees (Patronato) and Governing Council (Consejo Rector): Presidency, Executive Committee
- Director, Associate Director; R&D in IT, e-Science, Management
- Scientific Advisory Committee; Access Committee
- Directorates: Information Technologies; Business Development; Administration and Finance; Biomedicine and Life Sciences; Chemistry and Materials Science; Physics and Engineering; Operations; Astronomy and Space; Earth Sciences

58 IT research and development projects
- Continuation of the CEPBA (European Center for Parallelism of Barcelona) research lines
- Deep computing: performance tools, parallel programming, Grid, code optimization
- Computer architecture: vector processors, network processors

59 e-Science projects
- R&D collaborations: computational biology, computational chemistry, computational physics, information-based medicine

60 Management projects
- Supercomputer management: system administration, user support
- Business development: external relations, promotion, technology transfer, education
- Administration: accounting and finance, human resources

61 MareNostrum
- PowerPC 970FX processors (dual-processor nodes)
- 4 GB ECC 333 DDR memory per node
- 3 networks: Myrinet, Gigabit, 10/100 Ethernet
- Diskless network support
- Linux cluster

62 MareNostrum: system description
- 27 compute racks (RC01-RC27): 162 BC chassis with OPM and gigabit Ethernet switch; 2268 JS20+ nodes with Myrinet daughter card
- 7 storage server racks (RS01-RS07): 40 p615 storage servers (6/rack), 20 FAStT 100 (3/rack), 20 EXP100 (3/rack)
- 4 Myrinet racks (RM01-RM04): 10 Clos Myrinet switches, 2 Myrinet Spine 1280s
- 1 gigabit network rack: 1 Force10 E600 for the Gb network, 4 Cisco switches for the 10/100 network
- 1 operations rack (RH01): 7316-TF3 display, 2 p615 management nodes, 2 HMCs (model 7315-CR2), 3 remote async nodes, 3 Cisco switches, BC chassis (BCIO)

63 Processor: PowerPC 970FX

64 Blades, blade centers and racks
- JS20 processor blade: 2-way 2.2 GHz PowerPC 970 SMP, 4 GB memory (512 KB L2 cache), local 40 GB IDE drive, 2x1 Gb Ethernet on board, Myrinet daughter card
- BladeCenter: 14 blades per chassis (7U), 28 processors, 56 GB memory, gigabit Ethernet switch
- Rack: 6 chassis per rack (42U), 168 processors, 336 GB memory

65 MareNostrum: floor plan with cabling. 27 racks + 1 BC chassis, 4564 processors, 9 TB memory. (Legend: blade centers, Myrinet racks, storage servers, operations rack, gigabit switch, 10/100 switches.)


67
- 256 blades connected to 1 Clos Myrinet switch
- 1280 blades connected to 5 Clos Myrinet switches and 1 Spine
- All blades connected to 10 Clos Myrinet switches and 2 Spines
- 20 x 7 TB storage nodes
- Management rack, Force10 Gigabit switch, 10/100 Cisco switches

68 Blade center racks
- 6 BladeCenters per rack; 27 racks + 1 BladeCenter
- Cabling per rack: 84 fiber cables to the Myrinet switch; 6 Gb to the Force10 E600; 6 10/100 cat5 to the Cisco

69 Myrinet racks
- 10 Clos 256x256 switches: interconnect up to 256 blades; connect to the Spine (64 ports)
- 2 Spine 1280s: interconnect up to 10 Clos 256x256 switches
- Monitoring using the 10/100 connection

70 Myrinet racks (diagram): Spine 1280; Clos 256x256; blades or storage servers; 64 ports to the Spine, 320 from the Clos

71 Gb subsystem: Force10 E600
- Interconnection of the blade centers
- Used for system boot of every blade center
- 212 internal network cables: 170 for blades, 42 for p615s
- 76 connections available for external connection

72 Storage nodes
- Total of 20 storage nodes, 20 x 7 TB
- Each storage node: 2 x p615, FAStT100, EXP100
- Cabling per node: 2 Myrinet, 2 Gb to the Force10 E600, 2 10/100 cat5 to the Cisco, 1 serial

73 Management rack
- Contents: BCIO, 7316-TF3 display, 2 HMCs, 2 x p615, 3 x Cisco, 3 x 16-port remote async nodes

74 Mare Nostrum Supercluster (diagram): user community reaches the system through the gateway (Nilus); compute clusters Ebro, Tiberis, Rhodanus and Hispania on the Myrinet or Infiniband interconnect; data server (Barcino), visualization cluster (Tarraco) and ePOWER on the Gigabit Ethernet interconnect; Three Rivers contract boundary marked






80 MareNostrum

81 Outline
- High Performance Research Group at UPC
- Centers of Supercomputing: CEPBA, CIRI, BSC
- Future context in Europe: Networks of Excellence, Large Scale Facilities

82 Erkki Liikanen, Brussels, 21 April 2004

83 Future
- Networks of Excellence: HiPEAC, InSyT
- European FP VII: Technology Platforms
- Large Scale Facilities: DEISA

84 HiPEAC objectives
- To help companies identify and select the best architecture solutions for scaling up high-performance embedded processors in the coming years
- To unify and focus academic research efforts through a processor architecture and compiler research roadmap
- To address the increasingly slow progression of sustained processor performance by jointly developing processor architecture and compiler optimizations
- To explore novel approaches for achieving regular and smooth scaling of processor performance with technology, and to explore the impact of a wide range of post-Moore's-law technologies on processor architecture and programming paradigms

85 Partners
- Academic (leading partner per country): Chalmers University, Sweden; CNRS, France; Delft University, The Netherlands; Edinburgh University, UK; Ghent University, Belgium; INRIA, France; University of Augsburg, Germany; University of Patras, Greece; University of Pisa, Italy; UPC Barcelona
- Industrial: STMicro, Switzerland; Infineon, Germany; Ericsson, Sweden; Virtutech, Sweden; IBM Haifa, Israel; Kayser Italia, Italy; Philips Research, The Netherlands

86 HiPEAC topics
- Compiler optimizations
- Common compiler platform
- Processor architecture
- Processor performance
- Power-aware design
- Real-time systems
- Special purpose architectures
- Multithreading and multiprocessors
- Dynamic optimization
- Common simulation platform
- New processor paradigms

87 FET programme
- FET is the IST programme's nursery of novel and emerging scientific ideas. Its mission is to promote research that is long-term in nature or involves particularly high risks, compensated by the potential of a significant societal or industrial impact.
- As such, FET is not constrained by the IST programme priorities but rather aims to open new possibilities and set new trends for future research programmes in Information Society Technologies.
- FET goals will be achieved in two ways:
  - via the proactive scheme: a 'top down' approach which sets the agenda for a small number of strategic areas holding particular promise for the future, and
  - via the open scheme: a 'roots up' approach available to a wider range of ideas

88 FET: advanced computing architectures
- Critical mass: coordinate research by leading European groups
- Education: a high-quality European PhD program
- Cross-pollination between industry and academia: fuel industry with European students; feed real problems back to academia
- Avoid emigration of specialists: European students recruited by European companies; attract IT companies to Europe

89 Future Emerging Technologies
- Make Europe a leader in microprocessor design
- Support advanced research in computer architectures, advanced compilers, and micro-kernel operating systems
- Targets: a 10+ year horizon; 10x to 100x performance increase; 10x power reduction

90 Future: a 25-year horizon
- CMOS technology will continue to dominate the market for the next 25 years
- However, we must be ready for CMOS alternatives: quantum computing, molecular computing, ...
- Europe is well positioned in embedded processors, applications and technology
- Tomorrow's embedded processors are today's high-performance processors
- Europe needs to remain the leader in the future embedded domain

91 FET topics
- Emerging architectures: System-on-a-Chip (SoC) architectures; chip multiprocessors (CMP); reconfigurable logic
- Supporting technology: compiler optimizations; operating-system-level integration
- Key elements: reduce design and verification effort; develop infrastructure for system and application development; reuse of system architectures for a wide range of targets

92 Future
- Networks of Excellence: HiPEAC, InSyst
- IP projects: SCALA
- European FP VII: Technology Platforms
- Large Scale Facilities: DEISA

93 Muchas gracias (Thank you very much)
