A PRACTICAL INTRODUCTION TO THE ANSELM SUPERCOMPUTER: Infrastructure, access, and user support. David Hrbáč, 2013-09-27.

1 A Practical Introduction to the Anselm Supercomputer: infrastructure, access, and user support. David Hrbáč

2 Intro
– What is a supercomputer
– Infrastructure
– Access to the cluster
– Support
– Log-in

3 Why "Anselm"?
– Chosen from 6,000 name suggestions
– The very first coal mine in the region
– The very first mine to have a steam engine
– Anselm of Canterbury

4 Early days

5 Future - Hal

6 What is a supercomputer
– A bunch of computers
– A lot of CPU power
– A lot of RAM
– Local storage
– Shared storage
– High-speed interconnect
– Message Passing Interface (MPI)

7 Supercomputer

8 Supercomputer ?!?

10 Anselm HW
– 209 compute nodes
– 3,344 cores
– 15 TB RAM
– 300 TB /home
– 135 TB /scratch
– Bull Extreme Computing
– Linux (RHEL clone)

11 Types of Nodes
– 180 compute nodes
– 23 GPU-accelerated nodes
– 4 MIC-accelerated nodes
– 2 fat nodes

12 General Node
– 180 nodes, 2,880 cores in total
– two Intel Sandy Bridge E5-2665, 8-core, 2.4 GHz processors per node
– 64 GB of physical memory per node
– one 500 GB SATA 2.5″, 7,200 rpm HDD per node
– bullx B510 blade servers
– cn[1-180]

13 GPU-Accelerated Nodes
– 23 nodes, 368 cores in total
– two Intel Sandy Bridge E5-2470, 8-core, 2.3 GHz processors per node
– 96 GB of physical memory per node
– one 500 GB SATA 2.5″, 7,200 rpm HDD per node
– GPU accelerator: 1x NVIDIA Tesla K20 (Kepler) per node
– bullx B515 blade servers
– cn[ ]

14 MIC-Accelerated Nodes
– Intel Many Integrated Core (MIC) architecture
– 4 nodes, 64 cores in total
– two Intel Sandy Bridge E5-2470, 8-core, 2.3 GHz processors per node
– 96 GB of physical memory per node
– one 500 GB SATA 2.5″, 7,200 rpm HDD per node
– MIC accelerator: 1x Intel Xeon Phi 5110P per node
– bullx B515 blade servers
– cn[ ]

15 Fat Node
– 2 nodes, 32 cores in total
– two Intel Sandy Bridge E5-2665, 8-core, 2.4 GHz processors per node
– 512 GB of physical memory per node
– two 300 GB SAS 3.5″, 15,000 rpm HDDs (RAID 1) per node
– two 100 GB SLC SSDs per node
– bullx R423-E3 servers
– cn[ ]

17 Storage
– 300 TB /home
– 135 TB /scratch
– InfiniBand 40 Gb/s: native ~3,600 MB/s, over TCP ~1,700 MB/s
– Ethernet: ~114 MB/s
– Lustre file system

18 Lustre File System
– Clustered
– OSS: object storage server
– MDS: metadata server
– Limits in the petabyte range
– Parallel, striped access

23 Stripes
– Stripe count
– Parallel access
– Mind the script processes
– Stripe per gigabyte
– lfs setstripe | lfs getstripe
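The striping commands above can be sketched as follows. This is a minimal, hypothetical session on a Lustre mount: it assumes a `/scratch/$USER` directory exists and that the `lfs` client tool is available; the stripe count shown is illustrative, not a cluster default.

```shell
# Show the current striping layout of a directory:
lfs getstripe /scratch/$USER

# Make new files in this directory stripe across 8 OSTs,
# so large files are read/written in parallel:
lfs setstripe -c 8 /scratch/$USER/big-output

# Stripe across all available OSTs (-c -1) for a very large file:
lfs setstripe -c -1 /scratch/$USER/checkpoint.dat
```

Striping is set per directory or per (empty) file; files inherit the layout of the directory they are created in.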

25 Quotas
– /home: 250 GB
– /scratch: no quota
– lfs quota -u hrb33 /home
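A quota check might look like this on a login node. The login name `hrb33` is the presenter's example; substitute your own.

```shell
# Report block and inode usage against the 250 GB /home quota
# for a given user (requires a Lustre mount at /home):
lfs quota -u hrb33 /home
```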

27 Access to Anselm
– Internal Access Call: 4x a year (now in its 3rd round)
– Open Access Call: 2x a year (now in its 2nd round)

28 Proposals
– Proposals undergo evaluation: scientific, technical, economic
– Principal Investigator
– List of co-operators

29 Login Credentials
– Personal certificate
– Signed request
– Credentials sent encrypted: login, password, SSH keys, password to the key

30 Credentials Lifetime
– Valid while you have an active project or affiliation with IT4Innovations
– Deleted 1 year after the last project
– Announcements: 3 months, 1 month, and 1 week before removal

31 Support
– Bug-tracking and trouble-ticketing system
– Documentation
– IT4I internal command-line tools
– IT4I web applications
– IT4I Android application
– End-user courses

32 The main means of support: Request Tracker

35 Documentation
– https://support.it4i.cz/docs/anselm-cluster-documentation/
– Still evolving; changes almost every day

37 IT4I Internal Command-Line Tools
– it4free
– rspbs
– Licenses allocation
– Internal in-house scripts: credential-handling automation, cluster automation, PBS accounting

38 IT4I Web Applications
– Internal information system: project management, project accounting, user management
– Cluster monitoring

41 IT4I Android Application
– Internal tool; considering a release to end-users
– Features: news, graphs
– Feature requests: accounting, support, node allocation, job status

43 Log-in to Anselm
– Finally!
– SSH protocol
– Via anselm.it4i.cz
– login1.anselm.it4i.cz
– login2.anselm.it4i.cz
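A first connection might look like the following. The login name and key path are illustrative; your actual credentials come encrypted with your project approval, as described on the credentials slide.

```shell
# Log in to the cluster front end (round-trips to one of the
# login nodes behind anselm.it4i.cz):
ssh -i ~/.ssh/id_rsa hrb33@anselm.it4i.cz

# Or target a specific login node directly:
ssh hrb33@login1.anselm.it4i.cz
```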

53 VNC
– ssh anselm -L 5961:localhost:5961
– Remmina
– vncviewer :5961
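Why port 5961? VNC display :N listens on TCP port 5900 + N, so the display :61 used here maps to 5961. A sketch of the tunnel setup (the display number is whatever the VNC server on the cluster assigned you):

```shell
# VNC display :61 maps to TCP port 5900 + 61 = 5961.
port=$((5900 + 61))
echo "$port"    # prints 5961

# Forward that port over SSH, then point a viewer at the local end:
# ssh anselm -L 5961:localhost:5961
# vncviewer localhost:5961
```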

54 Links
– https://support.it4i.cz/docs/anselm-cluster-documentation/
– https://support.it4i.cz/
– https://www.it4i.cz/

55 Questions Thank you.

