Single-bunch instability preliminary studies ongoing.


CMAD simulation code status: the code tracks the beam through a MAD lattice, runs in parallel, and applies a beam-cloud interaction at each element in the ring, with different cloud distributions per element. Applications: single-bunch instability studies, SEY thresholds, dynamic aperture, tune shift, and more. Single-bunch instability preliminary studies are ongoing; electron-cloud build-up (SEY, vacuum chamber, etc.) is still to be added. ILC DR setup: an ILC DR bunch at injection as input, the ILC DR MAD lattice to track the beam with an electron cloud at each element location, and electron-cloud distributions in bends and straights (so far from POSINST).
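The per-element workflow described above (transport the bunch through each lattice element, then apply a beam-cloud kick at that location) can be sketched as follows. This is an illustrative Python outline under assumed names (`track_one_turn`, `rotation`), not CMAD itself; the toy lattice is four pure phase advances with no cloud kick.

```python
import numpy as np

def track_one_turn(coords, elements):
    """Track (N, 6) particle coordinates through one turn of a lattice.

    elements: list of (R, kick_fn) pairs, where R is the 6x6 linear
    transport matrix of the element and kick_fn models the beam-cloud
    interaction at that location (None means no cloud there).
    """
    for R, kick_fn in elements:
        coords = coords @ R.T            # linear transport through the element
        if kick_fn is not None:
            coords = kick_fn(coords)     # beam-cloud interaction at this element
    return coords

def rotation(mu):
    """6x6 map rotating the horizontal (x, px) plane by phase advance mu."""
    R = np.eye(6)
    c, s = np.cos(mu), np.sin(mu)
    R[0, 0], R[0, 1], R[1, 0], R[1, 1] = c, s, -s, c
    return R

# Toy one-turn lattice: four quarter-turn rotations, no cloud kicks,
# so one full turn brings the particle back to its initial coordinates.
elements = [(rotation(2 * np.pi * 0.25), None)] * 4
x0 = np.zeros((1, 6)); x0[0, 0] = 1e-3
x1 = track_one_turn(x0, elements)
```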

Goals: simulate the electron-cloud instability threshold in the LC damping rings, the LHC, the SPS, and storage rings. Status: benchmarked against the codes on the CERN web page, with good agreement with existing codes (HEADTAIL, "new simulation results 2006") for one interaction point per turn; SPS and ILC DR simulations and code benchmarking are ongoing. Beta version, with substantial room to gain in speed: the electric-field calculation is to be upgraded and the parallel features optimized.

Dynamics: beam and electron-cloud dynamics, with MAD sectormap and optics-function files as input. The code tracks one bunch through the lattice with first-order (switchable to second-order) transport maps R (and T). It is a particle-in-cell (PIC) code, tracking the 6D beam phase space with 3D beam dynamics and 3D electron-cloud dynamics. The beam-cloud interaction is applied at each element of the MAD lattice, with 2D beam-cloud and cloud-cloud forces computed at each interaction point. Electron dynamics: cloud pinching and magnetic fields included.
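The first- and second-order transport-map tracking mentioned above follows the TRANSPORT convention, x'_i = sum_j R_ij x_j + sum_jk T_ijk x_j x_k. A minimal sketch (Python, with a hypothetical helper `apply_map`; the example element is a simple drift, which has no second-order terms in these variables):

```python
import numpy as np

def apply_map(x, R, T=None):
    """Apply a first-order (R) and optional second-order (T) transport map.

    x: (N, 6) particle coordinates; R: (6, 6); T: (6, 6, 6), with
    x'_i = sum_j R_ij x_j + sum_{j,k} T_ijk x_j x_k.
    """
    x1 = x @ R.T
    if T is not None:
        x1 = x1 + np.einsum('ijk,nj,nk->ni', T, x, x)
    return x1

# Drift of length L in the transverse planes: x' = x + L*px, y' = y + L*py.
L = 2.0
R = np.eye(6); R[0, 1] = L; R[2, 3] = L
x = np.zeros((1, 6)); x[0, 1] = 1e-3        # px = 1 mrad
out = apply_map(x, R)                        # x picks up L*px = 2e-3
```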

Code benchmarking: CMAD against the HEADTAIL code ("new simulation results 2006" on the web page), with one interaction point per turn. CMAD ran with 32 processors in this simulation on Seaborg, the IBM RS/6000 SP at NERSC.

SPS simplified model, R. Tomás

SPS simplified model: emittance growth simulated with HEADTAIL, considering a single electron-beam interaction point at the center of each half cell (left) and two interaction points, close to the QD and QF quadrupoles (right). The electron cloud either responds dynamically or is frozen. The electron density is 2.75e11 m^-3 for the left picture and either 1e11 m^-3 or 2e11 m^-3 on the right.

SPS simplified model: emittance growth simulated with CMAD, considering a single electron-beam interaction point at the center of each half cell (left), with the potential frozen after the first interaction. The electron density ranges from 2.5e11 m^-3 to 7.5e11 m^-3.
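The "frozen potential after the first interaction" option above amounts to caching the cloud field per element on the first turn and reusing it afterwards, skipping the expensive self-consistent solve. A minimal sketch with hypothetical names (`FrozenFieldCache`, `toy_solver`), not CMAD code:

```python
import numpy as np

class FrozenFieldCache:
    """Cache the cloud electric field per element after the first interaction.

    On turn 1 the field is computed self-consistently and stored; on later
    turns the stored ('frozen') field is reused, skipping the PIC solve.
    """
    def __init__(self, solve_field):
        self.solve_field = solve_field   # expensive self-consistent solver
        self.cache = {}

    def field(self, element_id, beam, cloud, turn):
        if turn > 1 and element_id in self.cache:
            return self.cache[element_id]         # frozen field, no solve
        E = self.solve_field(beam, cloud)         # full field computation
        self.cache[element_id] = E
        return E

# Toy solver that counts how often the expensive path is taken.
calls = {'n': 0}
def toy_solver(beam, cloud):
    calls['n'] += 1
    return np.zeros(3)

cache = FrozenFieldCache(toy_solver)
for turn in (1, 2, 3):
    for elem in ('bend1', 'quad1'):
        cache.field(elem, None, None, turn)
# Only the two turn-1 interactions trigger the expensive solve.
```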


Improvements: the electric-field computation is currently a direct node-to-node (slow) computation and interpolation; there is room for improvement here. Currently looking into a parallel multigrid Poisson solver such as PHAML. Suggestions for improvement are welcome. Parallel improvement: the beam-cloud interaction is currently computed with all processors for each ring element. Next step: assign a ring section of N elements to N processors, each computing its own beam-cloud interaction, then track the beam through the N elements (assuming weak beam changes).
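The "direct node-to-node" field computation flagged as slow above is an O(N^2) pair sum over grid nodes; a multigrid or FFT Poisson solve on the same grid scales far better. The sketch below (hypothetical `direct_node_field`, not the CMAD routine) makes the direct approach concrete for a 2D field, where the field of a line charge falls off as 1/r:

```python
import numpy as np

def direct_node_field(nodes, charges, eps=1e-6):
    """Direct node-to-node 2D electric field: an O(N^2) pair sum.

    nodes: (N, 2) node positions; charges: (N,) charge per node.
    d / r^2 = r_hat / r is the 2D (line-charge) field, up to constants;
    eps regularizes near-coincident nodes.
    """
    N = len(nodes)
    E = np.zeros_like(nodes)
    for i in range(N):
        d = nodes[i] - nodes                     # vectors from all nodes to node i
        r2 = np.einsum('nj,nj->n', d, d) + eps
        r2[i] = np.inf                           # skip self-interaction
        E[i] = np.sum(charges[:, None] * d / r2[:, None], axis=0)
    return E

# Two opposite unit charges one unit apart: at each node the field points
# from the positive charge toward the negative one (+x here).
nodes = np.array([[0.0, 0.0], [1.0, 0.0]])
charges = np.array([1.0, -1.0])
E = direct_node_field(nodes, charges)
```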

Discussion, ideas, suggestions. Approximations to be added (optional): if the beta functions are identical at two elements in phase (2*pi*n apart), apply the cloud potential computed earlier. Approximations to be added (longer term): compute the electron-cloud build-up initially until saturation for each representative of a magnet class, with the density distributed over the ring; then update it at each turn (assuming weak changes of the beam turn by turn). Improvements (to be added?): symplectic integrator tracking: is it needed? (About 98% of phase space is conserved with R and T tracking over 1000 turns in the ILC DR.)
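The phase-space conservation quoted above can be quantified by how well a map preserves the symplectic form: a linear map R is symplectic when R^T S R = S, with S the standard symplectic matrix. A minimal sketch (Python, hypothetical helper `symplectic_error`), contrasting an exact rotation with a non-symplectic damped map:

```python
import numpy as np

def symplectic_error(R):
    """Frobenius-norm residual of the symplectic condition R^T S R = S."""
    n = R.shape[0] // 2
    S = np.zeros_like(R)
    for i in range(n):               # S is block-diagonal with [[0, 1], [-1, 0]]
        S[2*i, 2*i+1] = 1.0
        S[2*i+1, 2*i] = -1.0
    return np.linalg.norm(R.T @ S @ R - S)

mu = 0.3
# A pure rotation (phase advance) is exactly symplectic...
R_rot = np.array([[np.cos(mu), np.sin(mu)],
                  [-np.sin(mu), np.cos(mu)]])
# ...while a uniformly damped map shrinks phase-space area and is not.
R_damped = 0.99 * R_rot
err_rot = symplectic_error(R_rot)        # ~0 up to rounding
err_damped = symplectic_error(R_damped)  # clearly nonzero
```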

… (more lines cut)

!****************************************
! SET UP PARALLEL COMPUTATION
!****************************************
CALL MPI_INIT(ierror)                                  ! Initialize MPI
CALL MPI_COMM_SIZE(MPI_COMM_WORLD, numTasks, ierror)   ! Number of tasks (processors)
CALL MPI_COMM_RANK(MPI_COMM_WORLD, me, ierror)         ! ID of this task

! ******************************
! ***   START MAIN LOOP    *****
CALL READ_INPUT_FILES                       ! Read input files
IF (me==0) CALL BEAM_GENERATE               ! Generate beam particles
IF (me==0) CALL PRINT_BEAM_DISTRIBUTION     ! Print all beam particles to file
DO 200 nturn = 1, nturns
   CALL BEAM_DISTRIBUTION                   ! Load and order the longitudinally sliced beam distribution
   CALL PARTITION_BEAM_PARTICLES            ! Prepare to assign beam particles to each processor
   CALL MPI_BCAST(xxx, count, MPI_REAL, root, MPI_COMM_WORLD, ierr)         ! Processor 0 broadcasts the beam
   DO 100 ielement = 1, NumberOfElements    ! Loop over all non-zero elements
      CALL GRID_SETUP(ielement)
      CALL REMOVE_PARTICLES_EXCEEDING_APERTURE(ielement)
      IF (justtrackbeam==1) THEN            ! Track the beam without cloud
         CALL BEAM_TRACK_PARALLEL(ielement, ielement); GOTO 75
      ENDIF
      IF (ELENGHT(ielement) == 0.0 .AND. nturn > 1) GOTO 70                 ! Skip zero-length elements
      IF (ifrozen==1 .AND. nturn > 1) THEN  ! Reuse the frozen field
         CALL FROZEN_ELECTRIC_FIELD(ielement, frozen3Dcloud); GOTO 70
      ENDIF
      CALL BEAM_ON_3DGRID(ielement, grid3Dbeam)
      CALL ELECTRON_CLOUD_DISTRIBUTION(ielement)
      CALL PARTITION_ELECTRON_CLOUD
      countel = nemax * 6
      CALL MPI_BCAST(xxxel, countel, MPI_REAL, root, MPI_COMM_WORLD, ierr)  ! Processor 0 broadcasts the cloud
      CALL ELECTRIC_FIELD(ielement, grid3Dbeam, grid2Dcloud, frozen3Dcloud)
70    CONTINUE
      CALL BEAM_TRACK_PARALLEL(ielement, ielement)
75    CONTINUE
      ! *** Processor me=0 gathers xxx back from all processors ***
      ! (this is slow -- improve here)
      … (more lines cut)
      CALL MPI_SEND(xxxsend, isendcount, MPI_REAL, root, itag, MPI_COMM_WORLD, ierr)
      CALL MPI_RECV(xxxrecv, iksendcount, MPI_REAL, k, itag, MPI_COMM_WORLD, istatus, ierr)
      ! (this is slow -- improve here)
      END
      CALL FLUSH_OUT
      DEALLOCATE(grid3Dbeam, grid2Dcloud)
100 CONTINUE
   IF (me==0) CALL PRINT_BEAM_DISTRIBUTION
   IF (me==0) CALL SYNCHROTRON_OSCILLATION
200 CONTINUE
CALL COMPUTE_AND_PRINT_BEAM_STATISTICS(ielement-1, numfile)  ! Print the standard deviations to a file
CALL close_files
! ****   END MAIN LOOP   ****
!*******************************

SPS simplified model: synchrotron oscillations, with synchrotron tune Qs = 0.
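In a linear model, the longitudinal coordinates rotate by 2*pi*Qs per turn, so Qs = 0 leaves the longitudinal motion frozen, as in the simplified model above. A minimal sketch in normalized (z, delta) coordinates (hypothetical helper `synchrotron_turn`, not CMAD's SYNCHROTRON_OSCILLATION routine):

```python
import numpy as np

def synchrotron_turn(z, delta, Qs):
    """One-turn linear synchrotron rotation in the (z, delta) plane.

    Normalized coordinates assumed; the pair rotates by mu = 2*pi*Qs per
    turn, and Qs = 0 leaves (z, delta) unchanged.
    """
    mu = 2 * np.pi * Qs
    c, s = np.cos(mu), np.sin(mu)
    return c * z + s * delta, -s * z + c * delta

z, d = 0.01, 0.0
z1, d1 = synchrotron_turn(z, d, Qs=0.0)    # frozen longitudinal motion
```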