
1 High Throughput Computing Collaboration (HTCC) Jon Machen, Network Software Specialist, DCG IPAG, EU Exascale Labs, Intel Switzerland

2 Intel Confidential Legal Notices and Disclaimers INFORMATION IN THIS DOCUMENT IS PROVIDED IN CONNECTION WITH INTEL® PRODUCTS. NO LICENSE, EXPRESS OR IMPLIED, BY ESTOPPEL OR OTHERWISE, TO ANY INTELLECTUAL PROPERTY RIGHTS IS GRANTED BY THIS DOCUMENT. EXCEPT AS PROVIDED IN INTEL'S TERMS AND CONDITIONS OF SALE FOR SUCH PRODUCTS, INTEL ASSUMES NO LIABILITY WHATSOEVER AND INTEL DISCLAIMS ANY EXPRESS OR IMPLIED WARRANTY, RELATING TO SALE AND/OR USE OF INTEL PRODUCTS INCLUDING LIABILITY OR WARRANTIES RELATING TO FITNESS FOR A PARTICULAR PURPOSE, MERCHANTABILITY, OR INFRINGEMENT OF ANY PATENT, COPYRIGHT OR OTHER INTELLECTUAL PROPERTY RIGHT. A "Mission Critical Application" is any application in which failure of the Intel® product could result, directly or indirectly, in personal injury or death. SHOULD YOU PURCHASE OR USE INTEL'S PRODUCTS FOR ANY SUCH MISSION CRITICAL APPLICATION, YOU SHALL INDEMNIFY AND HOLD INTEL AND ITS SUBSIDIARIES, SUBCONTRACTORS AND AFFILIATES, AND THE DIRECTORS, OFFICERS, AND EMPLOYEES OF EACH, HARMLESS AGAINST ALL CLAIMS, COSTS, DAMAGES, AND EXPENSES AND REASONABLE ATTORNEYS' FEES ARISING OUT OF, DIRECTLY OR INDIRECTLY, ANY CLAIM OF PRODUCT LIABILITY, PERSONAL INJURY, OR DEATH ARISING IN ANY WAY OUT OF SUCH MISSION CRITICAL APPLICATION, WHETHER OR NOT INTEL OR ITS SUBCONTRACTOR WAS NEGLIGENT IN THE DESIGN, MANUFACTURE, OR WARNING OF THE INTEL PRODUCT OR ANY OF ITS PARTS. Intel may make changes to specifications and product descriptions at any time, without notice. Designers must not rely on the absence or characteristics of any features or instructions marked "reserved" or "undefined". Intel reserves these for future definition and shall have no responsibility whatsoever for conflicts or incompatibilities arising from future changes to them. The information here is subject to change without notice. Do not finalize a design with this information.
The products described in this document may contain design defects or errors known as errata which may cause the product to deviate from published specifications. Current characterized errata are available on request. Contact your local Intel sales office or your distributor to obtain the latest specifications and before placing your product order. This document contains information on products in the design phase of development. Intel® processor numbers are not a measure of performance. Processor numbers differentiate features within each processor family, not across different processor families. Go to: Learn About Intel® Processor Numbers. All products, computer systems, dates and figures specified are preliminary based on current expectations, and are subject to change without notice. Intel, the Intel logo, Intel Atom and Intel Core are trademarks of Intel Corporation in the U.S. and/or other countries. *Other names and brands may be claimed as the property of others. Copyright © 2013 Intel Corporation.

3 IPAG Europe: Labs in Europe. Locations: Jülich, Leuven, Paris, Geneva, Barcelona, Edinburgh. Focus areas: HPC System Architecture; Sniper Simulator; High Throughput Computing, Big Data; Scalable Tools, Programming Models; Workloads, DC Analytics; Data Science Algorithms

4 What is the HTCC? A 3-year collaboration, initiated in 2015, between Intel and the LHCb experiment at CERN. The focus is on developing the next DAQ using next-generation Intel® technology, chiefly: KNL (Knights Landing), Intel® Omni-Path, Intel® Xeon + FPGA, and Intel® GbE cards. Other (larger) experiments, such as ATLAS and CMS, are also evaluating Intel® technologies and are paying close attention to the results produced by the HTCC.

5 Who is the HTCC? Team members: Niko Neufeld (Director), Omar Awile, Christian Färber, Sébastien Valat, Rainer Schwemmer, Balázs Vőneki, Paolo Durante, Olof Bärring, Daniel Campora, Jon Machen (Intel®)

6 [image slide]

7 Obligatory Detector Graphic

8 What are the challenges for 2019?
- Luminosity will increase, resulting in many more collisions
- A 100 Gbit/s data-acquisition card
- Event-builder PCs handling 400 Gbit/s
- A high-throughput, high-link-load network with 500 x 100 Gbit/s ports
- Online data must be processed faster, through acceleration or parallelization
- Networking and compute resources need to adapt to differing workload requirements (online vs. offline)
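The port-count and per-node figures above can be sanity-checked with simple arithmetic. A minimal sketch, using only the per-link and port-count numbers from the slide; the derived values are back-of-the-envelope illustrations, not measured results.

```python
# Back-of-the-envelope check of the 2019 DAQ targets.
N_PORTS = 500            # event-builder network ports (from the slide)
LINK_GBPS = 100          # Gbit/s per port (from the slide)
EB_NODE_GBPS = 400       # Gbit/s handled by one event-builder PC (from the slide)

# Aggregate fabric bandwidth in one direction, in Tbit/s.
aggregate_tbps = N_PORTS * LINK_GBPS / 1000

# Event-builder PCs needed if every port were saturated at once.
min_eb_nodes = N_PORTS * LINK_GBPS // EB_NODE_GBPS

print(f"aggregate fabric bandwidth: {aggregate_tbps:.0f} Tbit/s per direction")
print(f"event-builder PCs needed at full load: ~{min_eb_nodes}")
```

At full load this works out to 50 Tbit/s per direction and on the order of 125 event-builder PCs, which shows why the 400 Gbit/s per-node target matters.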

9 LHCb TDAQ Architecture Using Intel®
[Architecture diagram: detector front-end electronics at UX85B feed ~8800 Versatile Links to the Point 8 surface, into PCIe40 cards in event-builder PCs (Intel® Xeon; Xeon + FPGA with Intel® Omni-Path). The event-builder network (Intel® Omni-Path and/or 100 GbE, 500 x 100 Gbit/s) connects to the event-filter farm (~80 subfarms behind subfarm switches; Intel® Xeon, Xeon + FPGA, KNL & Intel® OPA, 3D XPoint) and to online storage. The TFC distributes the clock and fast commands and receives throttle signals from the PCIe40.]

10 Intel® Xeon: Focus points for LHCb:
- Virtualization, with directly application-targeted features:
  - Cache Monitoring and Cache Allocation Technologies (CMT/CAT)
  - Memory Bandwidth Monitoring (MBM)
  - Posted interrupts
  - Page-modification logging
- Cost : performance ratio
- Power savings is a lesser concern
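To make the CAT item above concrete: on Linux, cache partitions are expressed as contiguous bitmasks of cache ways written to the kernel's resctrl interface (`/sys/fs/resctrl/<group>/schemata`). A minimal sketch, assuming CAT-capable hardware; the way counts and the DAQ-reservation scenario are illustrative assumptions, and only the string-building is shown (actually applying a schemata requires root).

```python
def cat_mask(n_ways: int, offset: int = 0) -> int:
    """Capacity bitmask of n_ways cache ways starting at `offset`.
    CAT requires the set bits to be contiguous, which this guarantees."""
    return ((1 << n_ways) - 1) << offset

def schemata_line(cache_id: int, mask: int) -> str:
    """One L3 allocation line in the format written to
    /sys/fs/resctrl/<group>/schemata on Linux."""
    return f"L3:{cache_id}={mask:x}"

# Illustrative scenario: reserve 4 L3 ways for a latency-sensitive
# DAQ process on cache domain 0.
daq_mask = cat_mask(4)              # ways 0-3 -> 0xf
print(schemata_line(0, daq_mask))   # L3:0=f
```

Isolating the event-builder's hot working set this way is one route to the predictable per-node throughput the DAQ needs.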

11 Intel® Xeon Phi: Applicability to LHCb: While LHCb workloads are still largely serial, both parallelization of existing algorithms and parallel execution of separate algorithms are being explored. Promising vectorization and parallelization results have already been achieved on the new Knights Landing architecture, but exploration is still in its infancy. Future generations of the Xeon Phi may also lend themselves to event building. *Figure credit: Omar Awile, HTCC
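The slide distinguishes two routes: parallelizing a single algorithm and running separate algorithms concurrently. A minimal sketch of the second route using the standard library; the "algorithms" here are toy stand-ins, not LHCb code, and threads are used for brevity (CPU-bound work would use `ProcessPoolExecutor` to sidestep the GIL).

```python
from concurrent.futures import ThreadPoolExecutor

def tracking(events):
    # Toy stand-in for a tracking algorithm.
    return sum(e * 2 for e in events)

def calorimetry(events):
    # Toy stand-in for an independent reconstruction algorithm.
    return sum(e + 1 for e in events)

def run_parallel(events):
    # Submit independent algorithms concurrently and gather results.
    with ThreadPoolExecutor(max_workers=2) as pool:
        f1 = pool.submit(tracking, events)
        f2 = pool.submit(calorimetry, events)
        return f1.result(), f2.result()

if __name__ == "__main__":
    print(run_parallel(range(10)))  # (90, 55)
```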

12 Intel® Xeon + FPGA: Interest to LHCb:
- Algorithmic acceleration: 35x performance gain over Xeon alone on Cherenkov photon reconstruction
- Near-term hardware upgrades anticipated: further improvements (likely as much as 64x) are expected as the HTCC gains access to units that include Arria 10 FPGAs and increased interconnect bandwidth
- Implementation targets:
  - Event building: data decompression and reformatting
  - Event filtering: tracking and particle identification
*Figure credit: Christian Färber, HTCC
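The 35x (and projected 64x) figures apply to one stage, Cherenkov photon reconstruction, not to the whole pipeline; Amdahl's law gives the overall effect. A minimal worked example; the 40% stage share below is an assumed illustration, not an LHCb measurement.

```python
def overall_speedup(fraction_accelerated: float, stage_speedup: float) -> float:
    """Amdahl's law: whole-job speedup when only one stage,
    taking `fraction_accelerated` of the runtime, gets faster."""
    return 1.0 / ((1.0 - fraction_accelerated)
                  + fraction_accelerated / stage_speedup)

# Assumed: the accelerated stage is 40% of total event-filter time.
for s in (35, 64):
    print(f"{s}x stage speedup -> {overall_speedup(0.4, s):.2f}x overall")
```

With that assumption, both 35x and 64x land near 1.6x overall, which is why the slide also lists decompression, reformatting, tracking, and particle identification as further offload targets.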

13 Intel® Omni-Path & 100 GbE: Benefits of collaboration:
- LHCb throughput requirements: ~80 Tbit/s, full-duplex
- The HTCC is building a 16-node cluster for testing and benchmarking of both Intel® Boulder Rapids and Intel® Omni-Path, as well as other, non-Intel fabric technologies
- Access to the Intel cluster at Swindon
- The HTCC is in a unique position to evaluate the strengths of Omni-Path against the LHCb problem domain
- Refinements to Omni-Path and potential improvements to the LHCb event-building process have already been achieved; more are anticipated
*Figure credit: Sébastien Valat, Balázs Vőneki, HTCC
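The ~80 Tbit/s requirement can be related to the 500-port, 100 Gbit/s network described earlier. A minimal sketch, assuming "full-duplex" means the total counts both directions; that reading is an assumption, not stated on the slide.

```python
import math

REQUIRED_TBPS = 80    # total requirement, both directions (from the slide)
LINK_GBPS = 100       # Gbit/s per link (from the slide)

# Assumption: full-duplex total -> half the load flows in each direction.
per_direction_tbps = REQUIRED_TBPS / 2
ports_needed = math.ceil(per_direction_tbps * 1000 / LINK_GBPS)

print(f"~{ports_needed} x {LINK_GBPS} Gbit/s ports per direction")  # ~400
```

Under that assumption, roughly 400 saturated 100 Gbit/s ports per direction are needed, so the 500-port fabric leaves headroom for imperfect link utilization.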

14 Intel® 3D XPoint™: Technology fit: Still awaiting hardware, but potential is seen in each of the following areas:
- Cost-effective memory to facilitate better process parallelization
- RAM-disk replacement
- A "check-point" or application boot-cache approach
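The boot-cache idea above amounts to persisting warmed-up application state once and restoring it on later starts. A minimal sketch under stated assumptions: the cache would live on the persistent-memory device, but here it goes to a fresh temporary directory so the example is self-contained, and `expensive_init` is a hypothetical stand-in for real initialization work.

```python
import pickle
import tempfile
from pathlib import Path

def load_or_build(cache: Path, build):
    """Restore state from the cache file if present (fast path),
    otherwise compute it and persist it for the next start."""
    if cache.exists():
        return pickle.loads(cache.read_bytes())
    state = build()
    cache.write_bytes(pickle.dumps(state))
    return state

def expensive_init():
    # Hypothetical stand-in for geometry/calibration loading at boot.
    return {"geometry": list(range(1000)), "version": 1}

# Fresh directory so the sketch never reads a stale cache.
cache_file = Path(tempfile.mkdtemp()) / "htcc_boot_cache.pkl"
state = load_or_build(cache_file, expensive_init)   # slow path: builds
state = load_or_build(cache_file, expensive_init)   # fast path: restores
print(state["version"])  # 1
```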

15 Summary: LHCb & Intel. Great work from great partners. Much more to come.

16 Thank You!

