Accuracy

Our society depends on software; this may be an obvious statement. Software bugs have been estimated to cost the U.S. economy 60 billion dollars each year, and about a third of that cost could be eliminated by improved testing. Bugs can also cause accidents in science, although in astronomy people usually do not end up in a hospital. In this lecture we focus on floating point numbers. It is important to realize that these numbers are represented by a limited number of bits and are therefore limited to a certain precision. We first discuss three sources of error:
– Rounding
– Cancellation
– Recursion
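The limited precision is already visible in the simplest decimal arithmetic; a minimal sketch using nothing but built-in floats:

```python
# 0.1 has no exact binary representation, so even a single addition
# is rounded, and accumulated sums drift away from the exact value.
a = 0.1 + 0.2
print(a)            # 0.30000000000000004, not exactly 0.3
print(a == 0.3)     # False

total = sum(0.1 for _ in range(10))
print(total == 1.0) # False: ten copies of the rounded 0.1 do not sum to 1.0
```

The drift is tiny per operation, but as the following slides show, loops can turn it into a visible error.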
Rounding

Calculations use a fixed number of significant digits, so after each operation the result usually has to be rounded. The rounding error is at most half a unit in the last place of the internal representation. In loops these errors can propagate and grow.

    def step(a, b, n):
        """Print the grid points of the interval [a, b] divided into n steps."""
        h = (b - a) / n
        x = a
        print("%.18f" % x)
        while x < b:
            x = x + h
            print("%.18f" % x)

    step(1.0, 2.0, 3)

Because h = 1/3 is not exactly representable, x is still slightly below b after n steps, and the loop prints one point too many.
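The "unit in the last place" can be inspected directly; a small sketch assuming Python 3.9 or later, where `math.ulp` is available:

```python
import math

# math.ulp(x) (Python >= 3.9) gives the spacing from x to the next float,
# i.e. the size of one unit in the last place at x.  A correctly rounded
# operation errs by at most half of this.
x = 1.0
print(math.ulp(x))                 # 2^-52 for x = 1.0
print(x + math.ulp(x) / 4 == x)    # True: an increment below half an ulp is rounded away
```

This is why a single operation is harmless: the danger comes from repeating the rounding many times in a loop.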
Rounding

Enhancing the accuracy:
– Replace the condition in the while statement with one that allows a small tolerance.

    def step(a, b, n):
        """Print the grid points of the interval [a, b] divided into n steps."""
        eps = 1.0e-8
        h = (b - a) / n
        x = a
        print("%.18f" % x)
        while abs(x - b) > eps:
            x = x + h
            print("%.18f" % x)

    step(1.0, 2.0, 3)
Rounding

Enhancing the accuracy:
– Replace the while loop with a for loop over an integer counter, so the number of steps is exact.

    def step(a, b, n):
        """Print the grid points of the interval [a, b] divided into n steps."""
        h = (b - a) / n
        x = a
        print("%.18f" % x)
        for i in range(n):   # exactly n steps, regardless of rounding in x
            x = x + h
            print("%.18f" % x)

    step(1.0, 2.0, 3)
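The integer counter fixes the number of steps, but the accumulated sum x = x + h still rounds at every iteration; recomputing each point directly from a rounds only once. A small sketch in plain double precision:

```python
# Accumulating x = x + h applies n roundings; a + n*h applies one.
a, b, n = 1.0, 2.0, 3
h = (b - a) / n

x = a
for i in range(n):
    x = x + h          # n accumulated roundings
print(x == b)          # False: x ends up one ulp below 2.0

print(a + n * h == b)  # True: the single rounding lands exactly on 2.0
```

For long loops it is therefore preferable to compute grid points as a + i*h rather than by repeated addition.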
Cancellation

Cancellation occurs in the subtraction of two almost equal numbers. If you subtract two numbers of approximately equal value, their errors are of comparable size as well, and the subtraction yields a small value with a relatively big error (which can be demonstrated with simple error analysis). This effect shows up when you try to calculate the roots of the quadratic equation

    a*x**2 + b*x + c = 0

which has two analytically equivalent expressions for the roots:

    x = (-b ± sqrt(b**2 - 4*a*c)) / (2*a)
    x = -2*c / (b ± sqrt(b**2 - 4*a*c))

with a = 1.0e-5, b = 1.0e3 and c = 1.0e3.
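The effect can be shown even without the quadratic; a minimal sketch in single precision, assuming numpy, computing 1 - cos(t) for small t in two analytically equivalent ways:

```python
import numpy as np

# For small t, cos(t) is so close to 1 that in float32 it rounds to
# exactly 1.0, and the subtraction cancels every significant digit.
t = np.float32(1.0e-4)
naive = np.float32(1.0) - np.cos(t)       # cancellation: result is 0.0
stable = np.float32(2.0) * np.sin(t / 2)**2   # identity 1-cos(t) = 2*sin(t/2)^2, no subtraction

print(naive)    # 0.0: all information lost
print(stable)   # close to the true value t**2/2 = 5e-9
```

Rewriting a formula to avoid the subtraction of nearly equal values, as the second line does, is the standard cure, and it is exactly what the alternative root formula on the next slide achieves.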
Cancellation

Consider the following code:

    import numpy

    def sqrt_single(x):
        X = x.astype('f')   # force single precision
        return numpy.sqrt(X)

    v = numpy.array([1.0e-5, 1.0e3, 1.0e3], 'f')
    print("SINGLE precision type:", v.dtype)
    a, b, c = v

    sq = sqrt_single(b*b - 4.0*a*c)
    xa1 = (-b + sq) / (2.0*a)
    xa2 = (-b - sq) / (2.0*a)
    print("\nx1, x2:", xa1, xa2)
    print("We expect x1*x2 - c/a = 0, but the result is:", xa1*xa2 - c/a)
    print("x1 from c/(a*x2) =", c/(a*xa2))

    xb1 = (-2.0*c) / (b + sq)
    xb2 = (-2.0*c) / (b - sq)
    print("\nx1, x2:", xb1, xb2)
    print("We expect x1*x2 - c/a = 0, but the result is:", xb1*xb2 - c/a)
    print("x2 from c/(a*x1) =", c/(a*xb1))

    print("\nsqrt(b*b - 4.0*a*c) =", sq)
Cancellation

And the result is:

    SINGLE precision type: float32
    x1 from c/(a*x2) = -1.0
    [the remaining printed values are missing from these notes]

This seems strange: two analytically equivalent formulas do not generate the same solutions for x1 and x2. This is the effect of cancellation. Note that 4*|a*c| is small compared to b**2, so the square root evaluates to a value near b and the error in the square root dominates. Cancellation occurs when we subtract b and a number near b: the first formula suffers for x1 (the -b + sq case), the second for x2 (the b - sq case). The correct roots, to single precision, are:
– x1 = -1
– x2 = -1.0e8
Recursion

Many scientific calculations compute a new quantity from a previous one. In such iterations the errors can accumulate and destroy your computation. In another task we will discuss Euler's method to solve a first-order differential equation y' = f(x, y). Values for y are calculated with

    y_{n+1} = y_n + h * f(x_n, y_n)

For small h the method is stable, but the errors accumulate over the growing number of steps.
Recursion

The problem: find a numerical approximation to y(t), where

    y'(t) = f(t, y(t)),   y(t0) = y0

Typically we use a fixed step size, i.e., h_n = h = constant.
Recursion def f(t,y): value = y+t return (value) def euler(t0,y0,h,tmax): t=t0; y=y0; td=[t0]; yd=[y0]; while t
Real life examples

Software causing severe problems:
– The Patriot missile: time conversion from integer to floating point accumulated a rounding error, therefore missing the target; the system had been tested only for runs of less than 100 hours.
– Truncation of amounts in stock market transactions and currency conversions.
– The Ariane 5 rocket explosion in 1996, due to a value overflowing the limited size of its integer memory storage.
– Illegal input not handled correctly: the USS Yorktown was left without propulsion for three hours due to a database overflow.
Random numbers

Python's random number generator:

    import random

    for i in range(5):
        # random float: 0.0 <= number < 1.0
        print(random.random(), end=' ')
        # random float: 10 <= number < 20
        print(random.uniform(10, 20), end=' ')
        # random integer: 100 <= number <= 1000
        print(random.randint(100, 1000), end=' ')
        # random even integer: 100 <= number < 1000
        print(random.randrange(100, 1000, 2))

Warning: the random number generators provided in the standard library are pseudo-random generators. This might be good enough for many purposes, including simulations, numerical analysis, and games, but it is definitely not good enough for cryptographic use.
Random numbers

numpy's random number generator:

    rand(d0, d1, ..., dn)              Random values in a given shape.
    randn(d0, d1, ..., dn)             Return a sample (or samples) from the "standard normal" distribution.
    randint(low[, high, size])         Return random integers from low (inclusive) to high (exclusive).
    random_integers(low[, high, size]) Return random integers between low and high, inclusive.
    random_sample([size])              Return random floats in the half-open interval [0.0, 1.0).
    random([size])                     Return random floats in the half-open interval [0.0, 1.0).
    ranf([size])                       Return random floats in the half-open interval [0.0, 1.0).
    sample([size])                     Return random floats in the half-open interval [0.0, 1.0).
    choice(a[, size, replace, p])      Generate a random sample from a given 1-D array.
    bytes(length)                      Return random bytes.
Random numbers

numpy's random number generator, a few examples:

    beta(a, b[, size])               The Beta distribution over [0, 1].
    binomial(n, p[, size])           Draw samples from a binomial distribution.
    chisquare(df[, size])            Draw samples from a chi-square distribution.
    exponential([scale, size])       Exponential distribution.
    gamma(shape[, scale, size])      Draw samples from a Gamma distribution.
    normal([loc, scale, size])       Draw random samples from a normal (Gaussian) distribution.
    poisson([lam, size])             Draw samples from a Poisson distribution.
    power(a[, size])                 Draw samples in [0, 1] from a power distribution with positive exponent a - 1.
    standard_exponential([size])     Draw samples from the standard exponential distribution.
    standard_gamma(shape[, size])    Draw samples from a standard Gamma distribution.
    standard_normal([size])          Return samples from a standard normal distribution (mean=0, stdev=1).
    standard_t(df[, size])           Standard Student's t distribution with df degrees of freedom.
    uniform([low, high, size])       Draw samples from a uniform distribution.

The numpy.random library contains a few extra probability distributions commonly used in scientific research, as well as a couple of convenience functions for generating arrays of random data.
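A quick sanity check on two of these distributions: with enough samples the empirical mean should match the theoretical one (the seed value 12345 below is an arbitrary illustrative choice):

```python
import numpy.random as nr

nr.seed(12345)                        # fix the seed so the run is repeatable
b = nr.binomial(10, 0.5, size=1000)   # 1000 draws of n=10 trials with p=0.5
g = nr.gamma(2.0, 2.0, size=1000)     # shape 2, scale 2

print(b.mean())   # close to the theoretical mean n*p = 5
print(g.mean())   # close to the theoretical mean shape*scale = 4
```

Comparing sample statistics with known theoretical values like this is a cheap way to verify that you are calling a distribution with the parameters you intended.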
Random numbers

numpy's random number generator, pseudo-randomness:

    >>> import numpy.random as nr
    >>> nr.rand(10)
    array([...])          # ten uniform floats in [0.0, 1.0); values omitted here
    >>> nr.rand(3, 2)
    array([[...],
           [...],
           [...]])        # a 3x2 array of uniform floats
    >>> nr.seed()
    >>> nr.rand(3)
    array([...])
    >>> nr.seed()
    >>> nr.rand(3)
    array([...])

Called without an argument, seed() re-initialises the generator from a system entropy source; re-seeding with the same fixed value (e.g. nr.seed(0)) restarts the pseudo-random sequence and reproduces exactly the same numbers.
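The reproducibility of the pseudo-random sequence can be demonstrated in a short script (the seed value 2024 is arbitrary; any fixed integer behaves the same):

```python
import numpy.random as nr

nr.seed(2024)          # fix the generator state
first = nr.rand(3)
nr.seed(2024)          # same seed: the sequence restarts from the same state
second = nr.rand(3)
print((first == second).all())   # True: identical draws
```

This is exactly what makes pseudo-random generators suitable for simulations, where a run must be repeatable for debugging, and unsuitable for cryptography, where predictability is fatal.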
Random generators

The normal distribution, also called the Gaussian distribution, is an extremely important probability distribution in many fields. It is a family of distributions of the same general form, differing in their location and scale parameters: the mean ("average") and the standard deviation ("variability"), respectively.

    >>> nr.normal(10.0, 2.0, 10)
    array([...])          # ten draws with mean 10.0 and standard deviation 2.0; values omitted here
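With a large sample the location and scale parameters can be recovered from the draws themselves; a small sketch (seed 0 chosen arbitrarily for repeatability):

```python
import numpy.random as nr

nr.seed(0)
s = nr.normal(10.0, 2.0, 100000)   # loc (mean) 10.0, scale (std dev) 2.0
print(s.mean())   # close to 10.0
print(s.std())    # close to 2.0
```

The sample mean and standard deviation converge to the loc and scale arguments as the number of draws grows (roughly as 1/sqrt(N)).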
In probability theory and statistics, the Poisson distribution is a discrete probability distribution. A Poisson random number counts the number of successes in n independent experiments, each with success probability p, in the limit n -> infinity and p -> 0 such that lambda = n*p >= 0 stays constant.

    >>> samples = nr.poisson(10, 100)
    >>> samples
    array([ 9, 17, 11, 12,  8,  5, 13,  8, 10, 13,  5,  8, 11, 10, 10,  8,  7,
            5,  9, 10,  8, 12, 16,  4,  8,  8, 15,  4,  9,  9,  8, 12, 17,  9,
           10,  8, 12, 16,  8,  8, 12, 15, 16, 12, 15, 10, 17,  9,  8, 10, 10,
           13, 12,  8, 11,  9, 14, 14, 11, 11,  9, 14,  4,  4, 12, 18,  8, 11,
           16, 11, 11,  5,  6, 13, 13, 11,  7,  8, 11, 11, 14,  9,  6, 11, 13,
           10,  8, 10, 10,  7, 11,  7, 13, 11, 16,  9,  9, 13, 11,  7])
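A distinguishing property of the Poisson distribution is that both its mean and its variance equal lambda, which a large sample makes visible (seed 1 chosen arbitrarily for repeatability):

```python
import numpy.random as nr

nr.seed(1)
s = nr.poisson(10, 100000)   # lam = 10
print(s.mean())   # close to lambda = 10
print(s.var())    # also close to lambda = 10
```

Checking that the sample variance tracks the sample mean is a quick diagnostic for whether count data is plausibly Poisson.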