
1
Francesco Di Renzo GGI - Firenze - February 9, 2007 apeNEXT workshop F. Di Renzo (1) and L. Scorzato (2) in collaboration with C. Torrero (3) (1) University of Parma. INFN Parma – MI11 (2) ECT* Trento. INFN Parma – MI11 (3) University of Bielefeld

2
NSPT: a tool for getting more from Lattice Perturbation Theory

Despite the fact that in PT the lattice is in principle a regulator like any other, in practice it is a very ugly one. As a matter of fact, the lattice is mainly intended as a non-perturbative regulator. Still, LPT is something you cannot actually live without!
>> In many (traditional) playgrounds LPT has often been replaced by non-perturbative methods: renormalization constants, Symanzik improvement coefficients, ...
>> On top of all this, LPT converges badly and one often tries to make use of Boosted PT (Parisi; Lepage & Mackenzie). This should be carefully assessed.
>> The key point: LPT is substantially more involved than other (perturbative) regulators. It is really cumbersome, and (diagrammatic) computations usually stop at 1 LOOP; 2 LOOPS are really hard and 3 LOOPS almost unfeasible.
>> With NSPT we can compute to HIGH LOOPS! We can assess convergence properties and truncation errors of the series. In this respect we think LPT should not necessarily be regarded as a second choice.
>> In the following we will mainly focus on renormalization constants for quark bilinears.

3
Outline

We saw some motivations.
Some technical details: just a flavour of what NSPT is and what the computational demands are.
A little bit on the status of renormalization constants for Lattice QCD: quark bilinears for the WW (Wilson gauge and Wilson fermions) action.
Current Lattice QCD projects can be interested in different combinations of gauge/fermion actions: take into account the Symanzik gauge action and Clover fermions as well!
apeNEXT can do the job (first configuration production just started).

4
From Stochastic Quantization to NSPT

Actually NSPT comes almost for free from the framework of Stochastic Quantization (Parisi and Wu, 1980). From the latter both a non-perturbative alternative to standard Monte Carlo and a new version of Perturbation Theory were originally developed. NSPT in a sense interpolates between the two. Given a field theory, Stochastic Quantization basically amounts to giving the field an extra degree of freedom, to be thought of as a stochastic time t, in which an evolution takes place according to the Langevin equation

  ∂_t φ(x,t) = -δS[φ]/δφ(x,t) + η(x,t)

Here η is a Gaussian noise, from which the stochastic nature of the equation originates. Now, the main assertion is very simply stated: asymptotically in the stochastic time, noise averages of observables converge to the functional-integral expectation values of the theory.
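As a minimal numerical sketch of the scheme above (not the authors' lattice code): a zero-dimensional "field", i.e. a single real variable with Gaussian action S = φ²/2, evolved with the Euler-discretized Langevin equation. All names and parameter values here are illustrative choices.

```python
import math
import random

def langevin_moment(dS, n_steps, eps, seed=1):
    """Euler step for the Langevin equation d(phi)/dt = -dS/dphi + eta,
    with noise normalized so that <eta(t) eta(t')> = 2 delta(t-t').
    Returns the stochastic-time average of phi^2 after thermalization."""
    rng = random.Random(seed)
    phi = 0.0
    total, count = 0.0, 0
    for step in range(n_steps):
        noise = math.sqrt(2.0 * eps) * rng.gauss(0.0, 1.0)
        phi = phi - eps * dS(phi) + noise
        if step > n_steps // 5:      # discard thermalization
            total += phi * phi
            count += 1
    return total / count

# Gaussian action S = phi^2/2, so dS/dphi = phi; exact <phi^2> = 1,
# up to O(eps) discretization error and statistical noise
moment = langevin_moment(lambda phi: phi, n_steps=400_000, eps=0.01)
print(moment)
```

For this trivial action the stochastic-time average converges to the exact expectation value 1, which is the content of the "main assertion" in this toy setting.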

5
(Numerical) Stochastic Perturbation Theory

Since the solution of the Langevin equation will depend on the coupling constant of the theory, look for the solution as a power expansion

  φ(x,t) = Σ_k g^k φ^(k)(x,t)

If you insert this expansion into the Langevin equation, the latter gets translated into a hierarchy of equations, one for each order, each dependent on the lower orders. Now, observables are also expanded, and we get power expansions from Stochastic Quantization's main assertion, e.g. for the propagator. Just to gain some insight (bosonic theory with quartic interaction): you can solve by iteration! [Diagrammatic expansion of the propagator, lost in extraction.]
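The hierarchy can be sketched numerically on the same zero-dimensional toy model (an illustration, not the authors' code): take S = φ²/2 + λφ⁴/4 and expand φ = φ₀ + λφ₁ + ... . The noise enters only the order-0 equation; order 1 is driven by φ₀³. For this toy integral the exact expansion is ⟨φ²⟩ = 1 - 3λ + O(λ²), so the measured order-1 coefficient 2⟨φ₀φ₁⟩ should approach -3.

```python
import math
import random

rng = random.Random(7)
eps, n_steps = 0.01, 600_000
phi0, phi1 = 0.0, 0.0
acc0, acc1, n = 0.0, 0.0, 0

for step in range(n_steps):
    eta = math.sqrt(2.0 * eps) * rng.gauss(0.0, 1.0)
    # hierarchy of Euler updates, order by order in lambda:
    # order 0: d(phi0) = -phi0 dt + noise
    # order 1: d(phi1) = -(phi1 + phi0^3) dt   (no noise beyond order 0)
    new0 = phi0 - eps * phi0 + eta
    new1 = phi1 - eps * (phi1 + phi0 ** 3)
    phi0, phi1 = new0, new1
    if step > n_steps // 5:          # discard thermalization
        acc0 += phi0 * phi0
        acc1 += 2.0 * phi0 * phi1    # O(lambda) coefficient of <phi^2>
        n += 1

c0, c1 = acc0 / n, acc1 / n
print(c0, c1)   # c0 near 1, c1 near -3
```

Each order is a deterministic equation sourced by the lower ones, which is exactly why the whole tower can be integrated together at essentially the cost of a (larger) Langevin simulation.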

6
Numerical Stochastic Perturbation Theory

NSPT (Di Renzo, Marchesini, Onofri 94) simply amounts to the numerical integration of these equations on a computer! For LGT this means integrating (Batrouni et al 85) the Langevin equation for the link variables,

  U'(x,μ) = exp(-F[U,η]) U(x,μ)

where the Langevin equation has been formulated in terms of a Lie derivative and an Euler scheme, and everything should be intended as a series expansion, i.e. one has to plug in

  U(x,μ) = 1 + Σ_k β^(-k/2) U^(k)(x,μ)

7
Numerical Stochastic Perturbation Theory

NSPT is not so bad to put on a computer! In particular on a parallel one (APE family)...
From fields to collections of fields: order n.
From scalar operations to order-by-order operations: order n².
Not too bad from the parallelism point of view!
APE100 - Quenched LQCD (now on PCs! Now also with Faddeev-Popov, but no ghosts!).
'00-now - APEmille - Unquenched LQCD (WW action): the Dirac matrix is easy to invert (it is PT, after all!).
apeNEXT - we have the resources to undertake a systematic investigation of different actions...
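The "order n²" remark refers to the basic order-by-order operation: multiplying two series truncated at order n costs one Cauchy product per site/link. A minimal sketch with scalar coefficients (on the lattice these would be su(3) matrices):

```python
def series_mul(a, b):
    """Order-by-order product of two truncated series:
    c_k = sum_{i+j=k} a_i * b_j, an O(n^2) operation
    (for NSPT the coefficients would be matrices, not scalars)."""
    n = len(a)
    c = [0.0] * n
    for i in range(n):
        for j in range(n - i):
            c[i + j] += a[i] * b[j]
    return c

# (1 + x) * (1 - x + x^2) = 1 + x^3, truncated at order 3
print(series_mul([1, 1, 0, 0], [1, -1, 1, 0]))
```

Since the same n² coefficient products are repeated identically at every site and link, the operation maps naturally onto a data-parallel machine, which is the point made above.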

8
Renormalization constants: our state of the art

Despite the fact that there is no theoretical obstacle to computing log-divergent RCs in PT, on the lattice one usually tries to compute them non-perturbatively. Popular (intermediate) schemes are RI-MOM (Rome group) and SF (Alpha Collaboration). We work in the RI-MOM scheme: compute quark bilinear operators between (off-shell, momentum p) quark states and then amputate to get Γ functions; then project on the tree-level structure. The field renormalization constant Z_q is defined via the quark propagator, and the renormalization conditions are imposed at p² = μ². One wants to work at zero quark mass in order to get a mass-independent scheme.
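The formulas on this slide did not survive extraction. A standard form of the RI(')-MOM definitions, reconstructed here with conventions that may differ from the authors' (normalizations and the RI vs RI' choice for Z_q vary in the literature):

```latex
% Amputated vertex from the bilinear Green function G_O and the propagator S
\Gamma_O(p) \;=\; S^{-1}(p)\, G_O(p)\, S^{-1}(p)

% Quark field renormalization (RI'-MOM convention)
Z_q(\mu) \;=\; \left. \frac{-i}{12\,p^2}\,
  \operatorname{Tr}\!\left[\slashed{p}\, S^{-1}(p)\right] \right|_{p^2=\mu^2}

% Renormalization condition for the bilinear O, with \hat P_O the projector
% on the tree-level Dirac structure
\left. Z_O(\mu)\, Z_q^{-1}(\mu)\,
  \tfrac{1}{12}\operatorname{Tr}\!\left[\Gamma_O(p)\,\hat P_O\right]
  \right|_{p^2=\mu^2} \;=\; 1
```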

9
Computation of Renormalization Constants

We compute everything in PT. Usually the divergent parts (anomalous dimensions) are easy, while fixing the finite parts is hard. In our approach it is just the other way around! RI-MOM is an infinite-volume scheme, while we have to perform finite-V computations! Care must be taken of this (crucial) aspect when dealing with logs. We actually take the anomalous dimensions for granted: see J. Gracey (2003), 3 loops! We take small values of the (lattice) momentum and look for hypercubic-symmetric Taylor expansions to fit the finite parts we want to get. We know which form to expect for a generic coefficient (at loop L).
- Wilson gauge / Wilson fermion (WW) action on 32^4 and 16^4 lattices.
- Gauge fixed to Landau (no anomalous dimension for the quark field at 1-loop level).
- n_f = 0 (both 32^4 and 16^4); 2, 3, 4 (32^4).
- The relevant mass counterterm (Wilson fermions) is plugged in (in order to stay at zero quark mass).

10
Computation of Renormalization Constants

Always keep in mind the master formula... and the definition of Z_q.
The O(p) are the quantities to be actually computed. They are made out of convenient inversions of the Dirac operator on sources (we work out everything in momentum space!).
If one computes ratios of the O's one obtains ratios of Z's, in which in particular Z_q cancels out. Convenient ratios are finite.
Z_v (Z_a) can be computed by taking convenient ratios of O_v (O_a) and S^-1, thus eliminating Z_q. They can also be computed taking ratios of O_v (O_a) and the corresponding conserved currents.
Z_s (Z_p) requires subtracting logs in order to obtain finite quantities. This needs care.
Once one is left with finite quantities, one can extrapolate to zero the irrelevant terms which go away with powers of pa (these powers comply with hypercubic symmetry).
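The final extrapolation step can be sketched in miniature (an illustration with made-up numbers, not the actual fit form, which involves the full hypercubic invariants): fit a finite part as a constant plus an irrelevant (pa)² term and keep the constant.

```python
def fit_constant_plus_p2(p2_values, y_values):
    """Least-squares fit of y = c0 + c1*(pa)^2 via the 2x2 normal equations.
    Extrapolating the irrelevant term to zero means keeping c0."""
    n = len(p2_values)
    sx = sum(p2_values)
    sxx = sum(x * x for x in p2_values)
    sy = sum(y_values)
    sxy = sum(x * y for x, y in zip(p2_values, y_values))
    det = n * sxx - sx * sx
    c1 = (n * sxy - sx * sy) / det
    c0 = (sy - c1 * sx) / n
    return c0, c1

# synthetic "finite part": continuum value 0.5 plus artifact 0.2*(pa)^2
p2 = [0.1, 0.2, 0.3, 0.4, 0.5]
y = [0.5 + 0.2 * x for x in p2]
c0, c1 = fit_constant_plus_p2(p2, y)
print(c0, c1)   # recovers 0.5 and 0.2
```

In the real computation higher powers of pa appear as well, with coefficients constrained by hypercubic symmetry, but the logic is the same: fit, then take the p → 0 piece.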

11
Ratios of bilinear Z's are finite and safe to compute! [Plots of Z_p/Z_s and Z_v/Z_a, lost in extraction.] Good n_f dependence.

12
Z_a and Z_v [plots lost in extraction]

13
Resumming Z_p/Z_s (to 4 loops!)

One can compare to NP results from SPQcdR. We can now have numbers for Z_a and Z_v. We resum n_f = 2, β = 5.8 using different coupling definitions: Z_p/Z_s = 0.77(1). Results are less and less dependent on the order at fixed scheme, and less and less dependent on the scheme at higher and higher orders. Z_p/Z_s and Z_s/Z_p are quite well inverse of each other. Compare to the SPQcdR result Z_p/Z_s = 0.75(3).
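"Different coupling definitions" means re-expanding the same truncated series in a boosted coupling before summing. A toy sketch of the re-expansion step (the coefficients and the relation between couplings below are invented for illustration, not the ones used in the talk):

```python
def reexpand(c, b):
    """Given f(g) = sum_n c[n] g^n and a new coupling defined implicitly
    by g = sum_k b[k] g_b^k (with b[0] = 0), return the coefficients of f
    as a series in g_b, truncated at the common order."""
    order = len(c)
    result = [0.0] * order
    power = [0.0] * order
    power[0] = 1.0                       # current truncated series for g^n
    for n in range(order):
        for k in range(order):
            result[k] += c[n] * power[k]
        # power <- power * b, truncated (one Cauchy product)
        new = [0.0] * order
        for i in range(order):
            for j in range(order - i):
                new[i + j] += power[i] * b[j]
        power = new
    return result

# f = 1 + 2g + 3g^2 re-expanded with g = g_b + g_b^2  ->  1 + 2g_b + 5g_b^2
print(reexpand([1, 2, 3], [0, 1, 1]))
```

Summing the two truncations at matched values of the couplings and watching the spread shrink order by order is one way to read the "less and less scheme dependent" statement above.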

14
Resumming Z_a and Z_v (to 4 loops!)

One can compare to NP results from SPQcdR. We get Z_a = 0.79(1) and Z_v = 0.70(1); the SPQcdR results are Z_a = 0.76(1) and Z_v = 0.66(2). Keep in mind the chiral extrapolation!

15
Renormalization constants: what next?

In these days Lattice QCD has great opportunities to really perform first-principles computations. There is nevertheless a variety of options as for the choice of the action. On top of the Wils/Wils (Wilson-gauge/Wilson-fermion) action we want to take into account the possible combinations of
- Wilson gauge action
- tree-level Symanzik gauge action
with
- (unimproved) Wilson fermion action
- (Wilson improved) Clover fermion action
Results will also apply to twisted mass (renormalization conditions in the massless limit).
Remember: n_f enters as a parameter and you would like to fit the n_f dependence. APEmille (some work started on Clover) is not enough (~10 months for a given n_f).

16
Why was NSPT in the end quite efficient? apeNEXT can do the job!

The APEmille code was not so brilliant on apeNEXT... but we can optimize a little bit. For example we can make use of the prefetching queues. We also have sofan at hand. OK, then the cost for one n_f is ~2 months.
You do not have to store fields, but collections of fields, on which the most intensive FPU operations are order-by-order multiplications (remember the observation on parallelism!).
This is a situation in which there is a reasonable hope to perform well on a designed-to-number-crunch machine... Keep the register file and the pipelines busy! (In Parma they would say fitto come il rudo... this is packed rubbish...)
This was traditionally quite easy on APE100 and APEmille (program memory and data memory are not the same).

17
ff7db6 | break pipe
       | !! first staple down
       | Faux0[0] = Umu[link_c+o4]
       | Faux = Faux0[o4bis]^+
       | Faux0[0] = Umu[link_c+o5]^+
       | Faux = AUmultU11(Faux,Faux0[0])
       | Faux = AUmultU11(Faux,Umu[link_c+o6])
       | F = F + Faux
       % C: 1343 F: 0 M: 0 X: 2 L: 0 I: 4 IQO: 4/4 275/17 110/2

ff84dd | break pipe
       | !! second staple up
       | Faux0[0] = Umu[link_c+o8]^+
       | Faux = AUmultU11(Umu[link_c+o7],Faux0[0])
       | Faux0[0] = Umu[link_c+o9]^+
       | Faux = AUmultU11(Faux,Faux0[0])
       | F = F + Faux
       % C: 1353 F: 0 M: 0 X: 2 L: 0 I: 4 IQO: 3/3 275/17 110/2...

ffc75d | do i = 0, Sp_Vm
       | Faux = logU(Umu[j1])
       | Faux = Faux - MomNull[Dir]
       | Faux = stricToA(Faux)
       | Umu[j1] = expA(Faux)
       | j1 = j1 + 1
       % C: 2855 F: 0 M: 0 X: 31 L: 10 I: 32 IQO: 2/2 381/39 111/9...

ffb599 | break pipe
       | !! Enforcing unitary constraint...
       | Faux = logU(Faux)
       | Faux = stricToA(Faux)
       | MomNull[Dir] = MomNull[Dir] + Faux
       | Umu[link_c] = expA(Faux)
       % C: 2856 F: 0 M: 0 X: 31 L: 10 I: 31 IQO: 1/1 325/37 230/18

18
Here is an example taken from bulk computations (going from a power-expanded A_μ field to a power-expanded U_μ field):

/AUseries_fact -> "expA" "(" AUseries_expr^a ")" {
  temporary AUseries res, aux, auxR
  temporary su3 a_
  temporary complex jnk, jnk1
  res = SetUpA
  aux = a
  res = res + aux
  jnk = complex_1
  jnk1 = complex_1
  /for n = 2 to ordine {
    auxR = SetUpA
    queue = a.AU.[0]
    /for i = 1 to (ordine-n) {
      a_ = queue
      /for j = (n-1) to (ordine-i) {
        auxR.AU.[i+j-1] = auxR.AU.[i+j-1] + a_ * aux.AU.[j-1]
      }
      queue = a.AU.[i]
    }
    a_ = queue
    auxR.AU.[ordine-1] = auxR.AU.[ordine-1] + a_ * aux.AU.[n-2]
    jnk1 = jnk1 + complex_1
    jnk = jnk/jnk1
    /for i = (n-1) to (ordine-1) {
      res.AU.[i] = res.AU.[i] + jnk * auxR.AU.[i]
    }
    aux = auxR
  }
  res.U0 = (1.0,0.0)
  return res
}
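The same order-by-order exponential can be sketched in Python with scalar coefficients standing in for the su(3) matrices (a transcription for illustration, not the production code): accumulate exp(a) = Σ aⁿ/n! truncated at a fixed order, where a has no order-0 part, exactly as in expA above.

```python
from fractions import Fraction

def exp_series(a):
    """Truncated exponential of a series with a[0] == 0:
    exp(a) = sum_n a^n / n!, built up one power at a time."""
    order = len(a)
    res = [Fraction(0)] * order
    res[0] = Fraction(1)          # the U0 = 1 piece
    power = list(a)               # current truncated power a^n, starting at a^1
    fact = Fraction(1)            # n!
    for n in range(1, order):
        for k in range(order):
            res[k] += power[k] / fact
        fact *= n + 1
        # power <- power * a, truncated; a[0] == 0 shifts orders up each time
        new = [Fraction(0)] * order
        for i in range(order):
            for j in range(order - i):
                new[i + j] += power[i] * a[j]
        power = new
    return res

# exp(x) as a series in x, truncated at order 4: coefficients 1, 1, 1/2, 1/6, 1/24
coeffs = exp_series([Fraction(0), Fraction(1), Fraction(0), Fraction(0), Fraction(0)])
print(coeffs)
```

Because a starts at order 1, each multiplication pushes the series up one order, so order-1 iterations exhaust the truncated sum; that is the same termination logic as the /for n = 2 to ordine loop in the TAO code.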

19
Conclusions

NSPT is by now quite a mature technique. Computations in many different frameworks can be (and actually are) undertaken.
More results for renormalization constants are to come for different actions: on top of Wils-Wils, also Wils-Clov, Sym-Wils and Sym-Clov. apeNEXT can manage the job!
Other developments are possible... (expansions in the chemical potential?)
So, if you want... stay tuned!
