1
For a Fistful of Tips & Tricks
2
Go ahead … make my day!
3
Dedication: To Eric Smith and his Nonpareil emulators!
4
Calculating Increments for Derivatives
First and second derivatives of f(x) are approximated using the central-difference formulas:
f'(x) ≈ (f(x+h) − f(x−h)) / (2h)
f''(x) ≈ (f(x+h) − 2f(x) + f(x−h)) / h²
To calculate the increment h, replace:
If |x| > 1 Then h = 0.01*|x| Else h = 0.01
With:
h = 0.01 * (1 + |x|)
5
Newton Methods
Replace Newton's method basic equation:
x = x − f(x) / f'(x)
With:
x = x − f(x)·f'(x) / [f'(x)² + eps]
Where eps is a small positive value. The denominator is always greater than zero. Good for when the root is at f'(x) = 0 (at a minimum, maximum, or saddle point).
6
Testing Convergence (and Calculating Errors)
Option 1 has the absolute-error condition:
|x₂ − x₁| < Δd
Option 2 has the relative-error condition:
|(x₂ − x₁) / x₂| < Δr
Enhanced option 2:
|(x₂ − x₁)·x₂ / (x₂² + eps)| < Δr
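The enhanced test can be sketched in Python (the helper name and the eps value are assumptions):

```python
def converged(x1, x2, delta_r=1e-8, eps=1e-30):
    """Enhanced relative test: |(x2 - x1) * x2 / (x2^2 + eps)| < delta_r.

    Behaves like |(x2 - x1)/x2| when |x2| >> sqrt(eps), but never
    divides by zero when the iterates approach zero.
    """
    return abs((x2 - x1) * x2 / (x2 * x2 + eps)) < delta_r
```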
7
Finding Primes

EXPORT IsPrime(n)
BEGIN
  LOCAL i;
  IF n < 2 THEN RETURN 0; END;
  IF n < 4 THEN RETURN 1; END; // 2 and 3 are prime
  IF n MOD 2 == 0 OR n MOD 3 == 0 THEN RETURN 0; END;
  i := 5;
  WHILE i * i <= n DO
    IF n MOD i == 0 OR n MOD (i + 2) == 0 THEN RETURN 0; END;
    i := i + 6;
  END;
  RETURN 1;
END;
8
Finding Primes (Cont.)

EXPORT GetPrimes(first, last)
BEGIN
  LOCAL i, counter, mat;
  counter := 0;
  mat := MAKEMAT(0, 1, 1);
  IF first == 2 THEN
    counter := 1;
    REDIM(mat, {1, 1});
    mat(1, 1) := first;
  END;
  IF first MOD 2 == 0 THEN first := first + 1; END;
  FOR i FROM first TO last STEP 2 DO
    IF IsPrime(i) > 0 THEN
      counter := counter + 1;
      REDIM(mat, {counter, 1});
      mat(counter, 1) := i;
    END;
  END;
  RETURN mat;
END;
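A Python rendering of the same 6k ± 1 scheme (my translation of the PPL code, returning a list instead of an HP Prime matrix):

```python
def is_prime(n):
    """Trial division by 2, 3, and then 6k +/- 1 candidates."""
    if n < 2:
        return False
    if n < 4:
        return True          # 2 and 3 are prime
    if n % 2 == 0 or n % 3 == 0:
        return False
    i = 5
    while i * i <= n:
        if n % i == 0 or n % (i + 2) == 0:
            return False
        i += 6
    return True

def get_primes(first, last):
    """All primes in [first, last], mirroring GetPrimes."""
    return [n for n in range(max(first, 2), last + 1) if is_prime(n)]
```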
9
Finding Primes (Cont.)
10
Finding Primes (Cont.)
11
Solovay–Strassen Primality Test
Efficient probabilistic test for prime numbers. Does not take the modulo of all lower primes! Requires special calculations for raising numbers to high powers and special versions of the modulo operator. To test n for being a prime, repeat the following steps k times:
- Choose a as a random number in the range [1, n − 1].
- Calculate x = (a/n), the Jacobi symbol.
- If x is 0, or a^((n−1)/2) mod n <> x mod n, then n is NOT a prime.
If all k repetitions pass, n is (probably) a prime.
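The steps above can be sketched in Python (the Jacobi-symbol helper and Python's three-argument `pow` stand in for the talk's special modulo routines; names are mine):

```python
import random

def jacobi(a, n):
    """Jacobi symbol (a/n) for odd n > 0."""
    a %= n
    result = 1
    while a != 0:
        while a % 2 == 0:            # pull out factors of 2
            a //= 2
            if n % 8 in (3, 5):
                result = -result
        a, n = n, a                  # quadratic reciprocity flip
        if a % 4 == 3 and n % 4 == 3:
            result = -result
        a %= n
    return result if n == 1 else 0

def solovay_strassen(n, k=20):
    """Probabilistic primality test following the steps above."""
    if n < 2:
        return False
    if n in (2, 3):
        return True
    if n % 2 == 0:
        return False
    for _ in range(k):
        a = random.randrange(2, n - 1)   # a = 1 is a trivial pass, so skip it
        x = jacobi(a, n)
        if x == 0 or pow(a, (n - 1) // 2, n) != x % n:
            return False
    return True
```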
12
Solovay–Strassen Primality Test (Cont.)
File “Solovay Primes for the HP Prime.txt” implements the Solovay-Strassen test. The file has the following functions:
- Function Modulo performs modular exponentiation.
- Function calculateJacobian calculates the Jacobi symbol.
- Function Solovay(p, iteration) determines if its argument p is a prime number.
- Function SolovayPrimes(first, last, iteration) returns a single-column matrix containing the primes in the range [first, last], using the specified number of iterations when calling function Solovay.
13
Solovay–Strassen Primality Test (Cont.)
14
Solovay–Strassen Primality Test (Cont.)
15
Miller–Rabin Primality Test
Efficient probabilistic test for prime numbers. Similar to, and better than, the Solovay-Strassen test. Requires special calculations for raising numbers to high powers and a special version of the modulo operator.
16
Miller–Rabin Primality Test Algorithm
Given number n to test for being a prime
Given k iterations (usually 5)
Set n - 1 as 2^r * d with d odd by factoring powers of 2 from n - 1
for i = 1 to k
    a = random number in the range [1, n - 1]
    x = a^d mod n
    bflag = false
    if x <> 1 and x <> n - 1 then
        for j = 1 to r - 1
            x = x^2 mod n
            if x = 1 then return false        # composite
            if x = n - 1 then bflag = true; exit loop
        next j
        if !bflag then return false           # composite
next i
return true   # probably prime
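A Python sketch of the pseudocode (Python's for/else replaces the bflag bookkeeping, and the built-in `pow(a, d, n)` stands in for the special modulo routines):

```python
import random

def miller_rabin(n, k=5):
    """Miller-Rabin probabilistic primality test."""
    if n < 2:
        return False
    if n in (2, 3):
        return True
    if n % 2 == 0:
        return False
    # write n - 1 as 2^r * d with d odd
    r, d = 0, n - 1
    while d % 2 == 0:
        r += 1
        d //= 2
    for _ in range(k):
        a = random.randrange(2, n - 1)
        x = pow(a, d, n)
        if x in (1, n - 1):
            continue
        for _ in range(r - 1):
            x = x * x % n
            if x == n - 1:
                break
        else:
            return False    # composite
    return True             # probably prime
```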
17
Miller-Rabin Primality Test (Cont.)
File “Miller-Rabin Primes for the HP Prime.txt” implements the Miller-Rabin test. The file has the following functions:
- Function Modulo performs modular exponentiation.
- Function MulMod performs modular multiplication.
- Function Miller(p, iteration) determines if its argument p is a prime number.
- Function MillerPrimes(first, last) returns a single-column matrix containing the primes in the range [first, last].
18
Miller–Rabin Primality Test (Cont.)
19
Miller–Rabin Primality Test (Cont.)
20
Normalizing Regression Data
Use mean and sdev values to get small ranges around [−1, 1]:
(xᵢ − mean) / sdev
Use minimum and maximum values to map into [a, b]:
((xᵢ − x_min) / (x_max − x_min))·(b − a) + a
a = 1 and b = 2 are good candidate values. Taking the logarithm of the normalized range is an option.
Use minimum and offset values:
xᵢ − x_min + Offset
21
Normalizing Regression Data
Use minimum and offset with ln(x):
ln(xᵢ − x_min + Offset), good for Offset = exp(1)
Use reciprocals: 1/X or 1 + 1/X for X > 0, or 1 + 1/(1 + X) for X ≥ 0.
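The min/max and ln-with-offset normalizations can be sketched in Python (function names are mine):

```python
import math

def min_max_scale(xs, a=1.0, b=2.0):
    """Map data into [a, b] using minimum and maximum values."""
    lo, hi = min(xs), max(xs)
    return [(x - lo) / (hi - lo) * (b - a) + a for x in xs]

def log_offset(xs, offset=math.e):
    """ln(x - x_min + Offset); Offset = e keeps every log argument >= e."""
    lo = min(xs)
    return [math.log(x - lo + offset) for x in xs]
```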
22
Bessel Function Jn(x)
The best cool trick for accurately calculating Jn(x) is found in the HP-65 Math Pac 2. It uses reverse recursive calculations for a non-normalized series, Tn(x), that parallels the values of the Bessel functions. There can be virtually an infinite set of non-normalized series! The method calculates a normalizing factor, K, using the values of the non-normalized series, then calculates the desired Jn(x) value as Tn(x)/K.
23
Bessel Function (Cont.)
Use concepts from:
J_{n−1}(x) = (2n/x)·J_n(x) − J_{n+1}(x)
J_0(x) + 2·Σ_{i=1..∞} J_{2i}(x) = 1
These equations also work on the values of the non-normalized series, except in the second equation the value 1 is replaced by the normalizing factor needed.
24
Bessel Function (Cont.)
To calculate Jn(x), calculate m as:
m = INT(1 + 3·x^(1/12) + 9·x^(1/3) + max(n, x))
Set T_{m+1}(x) and T_m(x) as non-normalized values for the Bessel functions:
T_{m+1}(x) = 0 and T_m(x) = a, an arbitrary positive number
Calculate lower members of T using:
T_{k−1}(x) = (2k/x)·T_k(x) − T_{k+1}(x), for k = m down to 1
25
Bessel Function (Cont.)
Calculate the normalizing factor K as:
K = T_0(x) + 2·Σ_{i=1..p} T_{2i}(x), where p = INT(m/2)
Calculate Jn(x) using:
J_n(x) = T_n(x) / K
The proceedings include the files “HP71B Recursive Bessel.txt” and “Recursive Bessel for HP Prime.txt” to support Bessel Jn(x) calculations.
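Putting the recurrence and the normalizing factor together, a Python sketch (the starting order m follows the slide's formula; the seed value is arbitrary, as the slide notes):

```python
def bessel_j(n, x):
    """Jn(x) via the non-normalized backward recurrence T and factor K."""
    m = int(1 + 3 * x ** (1 / 12) + 9 * x ** (1 / 3) + max(n, x))
    t = [0.0] * (m + 2)
    t[m + 1] = 0.0
    t[m] = 1.0                          # arbitrary positive seed
    for k in range(m, 0, -1):           # backward recurrence
        t[k - 1] = (2 * k / x) * t[k] - t[k + 1]
    K = t[0] + 2 * sum(t[2 * i] for i in range(1, m // 2 + 1))
    return t[n] / K
```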
26
Intermission!
27
Gamma Function Approximation
The most accurate approximations use polynomials for 1/Γ(x):
1/Γ(x) = Σ_{i=0..n} cᵢ·xⁱ, for x in the range [1, 2]
and use:
Γ(x+1) = x·Γ(x)
to bring the original argument down to the range [1, 2].
28
Gamma Function Approximation
Stirling’s approximation is the proverbial granddaddy of gamma approximations:
Γ(x+1) ≈ √(2πx)·(x/e)^x
Other prominent approximations based on Stirling are those by Lanczos and by Spouge. Both use coefficients that can be calculated; the coefficients for the Spouge approximation are much easier to calculate than the ones for the Lanczos approximation.
29
Gamma Function Approximation (Cont.)
Spouge’s approximation:
Γ(x+1) = (x+a)^(x+1/2)·e^(−(x+a))·[c₀ + Σ_{k=1..a−1} cₖ/(x+k)]
c₀ = √(2π)
cₖ = ((−1)^(k−1) / (k−1)!)·(a−k)^(k−1/2)·e^(a−k), for k = 1, 2, …, a−1
Where a > 2. The error is bounded by a^(−1/2)·(2π)^(−(a+1/2)).
30
Gamma Function Approximation (Cont.)
10 REM SPOUGE APPROX. FOR GAMMA(X)
20 INPUT "ENTER X? ";X
30 X=X-1
40 S=SQR(2*PI)
50 F=1
60 C8=1
70 A=12.5
80 FOR K=1 TO A-1
90 IF K > 2 THEN F = F * (K-1)
31
Gamma Function Approximation (Cont.)
100 C = C8/F
110 C = C * EXP(A - K)
120 C = C * (A - K) ^ (K - 1 / 2)
130 S = S + C / (X + K)
140 C8=-C8
150 NEXT K
160 G=(X + A) ^ (X + 1 / 2) / EXP(X + A) * S
170 DISP "GAMMA(";X+1;")=";G
180 END
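The same computation in Python, assuming a = 12.5 as in the BASIC listing (the function name is mine), checked against math.gamma:

```python
import math

def spouge_gamma(x, a=12.5):
    """Gamma(x) via Spouge's series with parameter a."""
    x -= 1.0                        # the series computes Gamma(x + 1)
    s = math.sqrt(2 * math.pi)      # c0
    sign, fact = 1.0, 1.0
    for k in range(1, int(a)):      # k = 1 .. a - 1, as in the BASIC loop
        if k > 1:
            fact *= k - 1           # running (k - 1)!
        c = sign / fact * math.exp(a - k) * (a - k) ** (k - 0.5)
        s += c / (x + k)
        sign = -sign
    return (x + a) ** (x + 0.5) * math.exp(-(x + a)) * s
```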
32
Gamma Function Approximation (Cont.)
Two approximations require only modest calculation effort. Robert Windschitl’s approximation for gamma, suggested in 2002 (by reducing an extended Stirling formula), is:
Γ(x) ≈ √(2π/x)·[(x/e)·√(x·sinh(1/x))]^x
A good approximation by Gergö Nemes gives the same number of exact digits as Windschitl’s approximation:
Γ(x) ≈ √(2π/x)·[(x + 1/(12x − 1/(10x))) / e]^x
33
Gamma Function Approximation (Cont.)
Nemes has another equation that is slightly less accurate than the one presented earlier:
Γ(x) ≈ √(2π/x)·(x/e)^x·(1 + 1/(15x²))^((5/4)x)
The first Nemes equation can be extended by adding more continued fractions to enhance the accuracy. I found a total of five equations by Nemes!
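Both closed-form approximations are easy to sketch in Python (function names are mine) and compare against math.gamma:

```python
import math

TWO_PI = 2 * math.pi

def windschitl_gamma(x):
    """Windschitl: sqrt(2*pi/x) * ((x/e) * sqrt(x*sinh(1/x)))**x."""
    return math.sqrt(TWO_PI / x) * ((x / math.e) * math.sqrt(x * math.sinh(1 / x))) ** x

def nemes_gamma(x):
    """Nemes: sqrt(2*pi/x) * ((x + 1/(12*x - 1/(10*x))) / e)**x."""
    return math.sqrt(TWO_PI / x) * ((x + 1 / (12 * x - 1 / (10 * x))) / math.e) ** x
```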
34
Gamma Function Approximation (Cont.)
x       Nemes %Err    Spouge %Err
1.5                   1.5E-13
2.5
3.75    8.29E-05      -2.2E-13
4.5     3.39E-05      2.75E-13
5.5     1.26E-05
6.5     5.51E-06      -3.2E-13
7.5     2.71E-06      -4.7E-12
8.5     1.45E-06      1.72E-12
9.5     8.35E-07      -5.2E-12
9.75    7.34E-07      -1.9E-12
35
Error Function Approximation
The standard Gaussian CDF is defined using:
Std Gaussian CDF = (1/2)·[1 + erf(x/√2)]
The standard inverse Gaussian CDF is defined using:
x = √2·erf⁻¹(2·CDF − 1)
36
Error Function Approximation
Approximation using:
erf(x) ≈ sgn(x)·√(1 − exp(−x²·(4/π + a·x²)/(1 + a·x²)))
Where a = 8(π − 3)/(3π(4 − π)) ≈ 0.140, with error less than 3.5×10⁻⁴ for all x. Using a = 0.147 reduces the maximum error to about 1.3×10⁻⁴.
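A Python sketch with a = 0.147 (the function name is mine), compared against math.erf:

```python
import math

A = 0.147   # constant recommended on the slide

def erf_approx(x):
    """sgn(x) * sqrt(1 - exp(-x^2 * (4/pi + a*x^2) / (1 + a*x^2)))."""
    s = -1.0 if x < 0 else 1.0
    x2 = x * x
    return s * math.sqrt(1 - math.exp(-x2 * (4 / math.pi + A * x2) / (1 + A * x2)))
```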
37
Inverse Error Function Approximation
Approximation using:
erf⁻¹(x) ≈ sgn(x)·√(√((a₁ + a₂)² − a₃) − (a₁ + a₂))
Where: a₁ = 2/(aπ), a₂ = ln(1 − x²)/2, a₃ = ln(1 − x²)/a, and a is the same value used to approximate erf(x).
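A Python sketch (the function name is mine), using the same a as the erf(x) approximation:

```python
import math

A = 0.147   # same constant as in the erf(x) approximation

def erfinv_approx(x):
    """sgn(x) * sqrt(sqrt((a1 + a2)^2 - a3) - (a1 + a2)) for |x| < 1."""
    s = -1.0 if x < 0 else 1.0
    a1 = 2 / (A * math.pi)
    a2 = math.log(1 - x * x) / 2
    a3 = math.log(1 - x * x) / A
    return s * math.sqrt(math.sqrt((a1 + a2) ** 2 - a3) - (a1 + a2))
```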
38
Normal CDF Approximation
Approximation according to Roger Hart using:
(1/√(2π))·∫ₓ^∞ e^(−t²/2) dt ≈ e^(−x²/2) / [√(2π)·(x + 0.8·e^(−0.4x))], for x ≥ 0
Has good accuracy. Offers complementary values compared to the HP Prime NORMALD_CDF.
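A Python sketch of the upper-tail formula (the function name is mine), checked against the exact tail computed with math.erfc:

```python
import math

def upper_tail_q(x):
    """Approximate Q(x) = P(Z > x) for x >= 0 using Hart's formula."""
    return math.exp(-x * x / 2) / (math.sqrt(2 * math.pi) * (x + 0.8 * math.exp(-0.4 * x)))
```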
39
Inverse Student-t CDF Approximation
The approximation is:
Inverse CDF Student-t ≈ exp(A + B/df + C/df²)
Where A, B, and C depend on the significance level; coefficients are tabulated for levels 0.200, 0.150, 0.100, 0.050, and 0.025.
40
General Comment on Function Approximations
There are many approximations for Bessel functions, gamma(x), erf(x), inverse erf(x), and just about any function, using polynomial approximations. Some approximations are regular polynomials of the function’s argument x. Other approximations use transformation variables such as t = 1/(a + bx) or t = 1/(a + b|x|). Some approximations use Padé polynomials.
41
Compact Composite Simpson’s Rule
Simpson’s rule uses the following equation:
∫ₐᵇ f(x) dx ≈ (h/3)·(f(a) + 4f(a+h) + 2f(a+2h) + … + 2f(b−2h) + 4f(b−h) + f(b))
Typical implementations use two loops: one for the odd terms, the other for the even terms. The proposal is to use one loop.
42
Compact Composite Simpson’s Rule (Cont.)
Given f(x), range (a, b), and increment h:
Sum = f(a) + f(b)
C = 4
a = a + h
While a < b
    Sum = Sum + C * f(a)
    a = a + h
    C = 6 - C
Calculate the area as (h/3) * Sum
43
Compact Composite Simpson’s Rule (Cont.)
Given f(x), range (a, b), and increment h:
Sum = f(a) - f(b)
a = a + h
While a <= b
    Sum = Sum + 4 * f(a) + 2 * f(a + h)
    a = a + 2h
Calculate the area as (h/3) * Sum
44
Compact Composite Simpson’s 3/8 Rule
Simpson’s 3/8 rule uses the following equation:
∫ₐᵇ f(x) dx ≈ (3h/8)·(f(a) + 3f(a+h) + 3f(a+2h) + 2f(a+3h) + … + 3f(b−h) + f(b))
Typical implementations use two (or three?) loops: one for the terms multiplied by 3, another for the terms multiplied by 2. The proposal is to use one loop.
45
Compact Composite Simpson’s 3/8 Rule (Cont.)
Given f(x), range (a, b), and number of points n (a multiple of 3):
Sum = f(a) + f(b)
h = (b - a) / n
For i = 1 to n - 1
    C = 2 + sign(i mod 3)
    a = a + h
    Sum = Sum + C * f(a)
Calculate the area as (3h/8) * Sum
46
Compact Composite Numerical Integration
For other composite integration rules, the trick is to use a single loop and change the coefficient multiplying f(x) for each term. For more complex patterns of coefficient values, you can use an array of coefficients and an index that cycles using the mod operator.
47
Gauss–Chebyshev Quadrature
The basic form integrates a function between −1 and 1 using n nodes:
∫₋₁¹ f(x)/√(1 − x²) dx = Σ_{i=1..n} wᵢ·f(xᵢ)
It can easily be used for a finite integration range [a, b]. The weights are fixed at π/n. The f(x) values are evaluated at shifted cosine values.
48
Gaussian Chebyshev Quadrature (Cont.)
The equation is:
Area(a, b, n) = ((b − a)/2)·Σ_{i=1..n} wᵢ·f(xᵢ)·√(1 − yᵢ²)
yᵢ = cos[(2i − 1)π/(2n)]
xᵢ = ((b − a)/2)·yᵢ + (b + a)/2
wᵢ = π/n
49
Gaussian Chebyshev Quadrature (Cont.)
10 REM GAUSS-CHEBYSHEV QUADRATURE
20 INPUT "A? ";A
30 INPUT "B? ";B
40 INPUT "N? ";N
50 T1=(B-A)/2
60 T2=(B+A)/2
70 T3=PI/2/N
80 S=0
90 RADIANS
50
Gaussian Chebyshev Quadrature (Cont.)
100 FOR I=1 TO N
110 Y=COS((2*I-1)*T3)
120 X=T1*Y+T2
130 GOSUB 1000
140 S=S+F*SQR(1-Y*Y)
150 NEXT I
160 R=2*T1*T3*S
170 DISP "AREA=";R
180 GOSUB 2000
51
Gaussian Chebyshev Quadrature (Cont.)
190 END
1000 REM F(X)
1010 F=1/X
1020 RETURN
2000 REM COMPARE WITH ACTUAL INTEGRAL
2010 Y=LOG(B/A)
2020 DISP "EXACT=";Y
2030 DISP "% ERR=";100*(Y-R)/Y
2040 RETURN
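The BASIC program translates to Python directly (the function name is mine); with f(x) = 1/x on [1, 2] the result approaches ln 2, matching the program's own check:

```python
import math

def gauss_chebyshev(f, a, b, n):
    """Finite-range Gauss-Chebyshev quadrature with fixed weights pi/n."""
    t1 = (b - a) / 2
    t2 = (b + a) / 2
    total = 0.0
    for i in range(1, n + 1):
        y = math.cos((2 * i - 1) * math.pi / (2 * n))     # Chebyshev node
        total += f(t1 * y + t2) * math.sqrt(1 - y * y)    # undo the weight
    return t1 * math.pi / n * total
```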
52
Thank You!