MATH 522 2019 Course Notes
Prof. Zachariah B. Etienne
MATH 522 Additional Online Reading List
Note 1: All links to Wikipedia reference versions of the articles
that have been vetted by the instructor and may be considered trustworthy.
Note 2: This reading list should be considered a complement to the
lecture notes as well as to the suggested text for this course
(Numerical Recipes, second edition or higher).
Note 3: For many terms in Numerical Analysis, multiple definitions
may be found. In such a case, only the lecture notes' definition
will be accepted.
Preliminaries:
All material in MATH 521 and MATH 522 builds on basic undergraduate
mathematics; when you start this course, I assume you are already
well-versed in it. If you need to brush up, here are some resources
that may help:
Scientific Notation and Significant Figures
Review of scientific notation and significant figures
Think you're ready? Take these online quizzes:
Quiz on scientific notation #1
Quiz on scientific notation #2
Quiz on scientific notation and significant figures
Basic Algebra: Solving Nonlinear Equations and Inequalities
Involving Logarithms
Involving Exponentials and Logarithms
Quadratic Equations and Quadratic Formula
Quadratic inequalities
A worked out example of both linear and nonlinear (quartic) inequalities:
Example problem, determining where strict diagonal dominance criterion is satisfied
Calculus I: Differentiation
Chain Rule Practice Quiz
Finding minima and maxima of functions
Calculus II: Integration
Integration by variable substitution
Integration by parts
Linear Algebra
Computing Determinants
Properties of Determinants
Eigenvalues and Eigenvectors
Eigenvalue/Eigenvector Worksheet, with solutions
Determining the Scale of a Problem
Fermi Problems/Back-of-the-Envelope: Roughly Estimating the Solution
Rule of 72: Useful for estimating exponential growth.
Guide to Big-O Notation
Big-O Notation: Basics
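The Rule of 72 is easy to check against the exact doubling time, which for a growth rate r per period is ln(2)/ln(1 + r). A minimal sketch comparing the two (the growth rates chosen here are arbitrary illustrations):

```python
import math

# Rule of 72: doubling time (in periods) is approximately 72 / (rate in percent).
# Compare against the exact doubling time ln(2) / ln(1 + r).
for pct in (1, 2, 4, 6, 8, 12):
    r = pct / 100.0
    exact = math.log(2) / math.log(1 + r)   # exact doubling time
    rule = 72.0 / pct                       # Rule of 72 estimate
    print(f"{pct:2d}%: exact {exact:6.2f} periods, rule of 72 {rule:6.2f} periods")
```

The approximation is within a few percent for typical growth rates, which is why it is a standard back-of-the-envelope tool.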
Roundoff Error, Loss of Significance
Chapter 1 of Numerical Recipes
Floating Point Arithmetic Primer
How double precision numbers are stored on the computer, and why we can
expect roughly 15-16 decimal digits of precision
Notes on Loss of Significance
Notes on rewriting expressions to avoid catastrophic cancellation
Tips for determining how many digits of significance you can expect
from a double-precision calculation (note that this neglects guard digits):
- Evaluating the expression according to the proper order of operations,
check for arithmetic steps that go out of bounds for double-precision
arithmetic. (The smallest-magnitude nonzero normalized number is roughly
plus or minus 2.2e-308; the largest finite number is roughly plus or minus
1.8e+308.) If this happens, evaluate to zero or infinity as appropriate.
You might still retain some digits of significance.
- Check for catastrophic cancellation.
- Check for numbers that are exactly representable by double
precision. If these exist, they are
known to an infinite number of significant digits. If not, they are
generally known to only 15--16 significant digits.
- Dividing or subtracting two numbers that are identical to all
significant digits will yield exactly one or zero, respectively.
Please contact the instructor if you would like to contribute further tips.
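Catastrophic cancellation, and its cure by rewriting the expression, can be seen directly on the computer. A minimal sketch using the example 1 - cos(x) for small x (the choice x = 1e-8 is an illustration; the identity 1 - cos(x) = 2 sin^2(x/2) is standard):

```python
import math

x = 1e-8
# Naive form: cos(1e-8) = 1 - 5e-17 rounds to exactly 1.0 in double
# precision, so the subtraction cancels catastrophically and returns 0.0.
naive = 1.0 - math.cos(x)
# Rewritten via the identity 1 - cos(x) = 2 sin^2(x/2): no subtraction
# of nearly-equal numbers, so full precision is retained.
stable = 2.0 * math.sin(x / 2.0) ** 2
print(naive)   # 0.0 -- every significant digit lost
print(stable)  # 5e-17, correct to ~16 significant digits
```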
Example problems:
When the following expressions are evaluated by the computer,
to how many significant decimal digits will the numerical result
agree with the exact result? Your answer to this part will consist of
a single integer (infinity is an acceptable answer), and the numerical
answer will be accepted so long as it is correct to within 1 decimal
digit.
For the purpose of this problem, apply IEEE 754 standard-compliant
double precision arithmetic, assuming all given
integers represented in integer or floating point format between
-2^53 and 2^53 (i.e., integers between ≈ -9×10^15
and ≈ 9×10^15
inclusive) are exact (for
example, 2.01e2 is exact), as well as powers of
1/2. Otherwise, assume that in double precision the number
is only known to 16 significant digits, and that machine epsilon is
4e-16. We use computer scientific notation, such that, e.g.,
5.63e22 = 5.63×10^22.
- 16 - 2.355e-10
- 1e52 + 1e-10
- 1e230 / (1e-220)
- 1024 - 4.2e-183
- 2 / (1e120 * 1e120 * 1e120)
- (3.24e-35 - 1.2e-1) / (2.54e-256)
- (3e-250 + 3)
- (3e-250 + 3) / (3e250)
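Several of the behaviors these exercises probe can be verified directly, since Python floats are IEEE 754 doubles. A sketch checking three of the expressions above:

```python
import math

# 1e52 + 1e-10: the small term is ~62 orders of magnitude below the large
# one, far outside the ~16 digits a double retains, so it vanishes entirely.
print((1e52 + 1e-10) == 1e52)     # True

# 2 / (1e120 * 1e120 * 1e120): the denominator (1e360) exceeds the largest
# finite double (~1.8e308) and overflows to infinity, so the quotient is 0.0.
denom = 1e120 * 1e120 * 1e120
print(math.isinf(denom))          # True
print(2.0 / denom)                # 0.0

# 3e-250 + 3: the tiny term is lost entirely; the sum is exactly 3.0.
print((3e-250 + 3) == 3.0)        # True
```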
Solving Linear Sets of Equations on the Computer
Chapter 2 of Numerical Recipes
Review: Gaussian Elimination
LU Decomposition, by example
LU Decomposition: Computational complexity, basic properties, pivoting
to reduce roundoff error
Why do we favor the LU Decomposition over Gaussian Elimination?
Roundoff error analysis of LU Decomposition
Jacobi Method, derivation
Gauss-Seidel Method, Successive Over-Relaxation (SOR), derivation
Positive Definite Matrices
Reducible/Irreducible Matrices
Spectral Radius
Rigorous derivation of Jacobi and Gauss-Seidel, with convergence criteria
Example problem, determining where strict diagonal dominance criterion is satisfied
Detailed lecture notes on Numerical Linear Algebra (see, e.g., Chapter 5)
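As a concrete companion to the Jacobi Method readings, here is a minimal sketch of Jacobi iteration on a small strictly diagonally dominant system, for which convergence is guaranteed (the matrix and right-hand side are arbitrary illustrations):

```python
# Jacobi iteration for A x = b. Each sweep updates every component using
# only the previous iterate: x_i <- (b_i - sum_{j != i} A_ij x_j) / A_ii.
A = [[4.0, 1.0, 1.0],
     [1.0, 5.0, 2.0],
     [0.0, 1.0, 3.0]]   # strictly diagonally dominant by rows
b = [6.0, 8.0, 4.0]

x = [0.0, 0.0, 0.0]     # initial guess
for _ in range(100):
    x_new = [(b[i] - sum(A[i][j] * x[j] for j in range(3) if j != i)) / A[i][i]
             for i in range(3)]
    if max(abs(x_new[i] - x[i]) for i in range(3)) < 1e-12:
        x = x_new
        break
    x = x_new

# Residual check: A x should be very close to b.
residual = max(abs(sum(A[i][j] * x[j] for j in range(3)) - b[i]) for i in range(3))
print(x, residual)
```

Replacing the update so each new component is used as soon as it is computed turns this into the Gauss-Seidel Method.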
Function Approximation (Approximation Theory)
Taylor Series Review
Series Convergence: Ratio Test
Radius of Convergence
Using Taylor Series to compute pi (Gregory-Leibniz series; converges
extremely slowly)
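The extremely slow convergence of the Gregory-Leibniz series, pi = 4(1 - 1/3 + 1/5 - 1/7 + ...), is easy to see numerically: after N terms the error is roughly 1/N, so each extra decimal digit costs ten times as many terms. A minimal sketch:

```python
import math

# Gregory-Leibniz series: pi = 4 * sum_{k=0}^inf (-1)^k / (2k + 1).
N = 100000
approx = 4.0 * sum((-1) ** k / (2 * k + 1) for k in range(N))
print(approx, abs(approx - math.pi))  # error ~1e-5 after 100000 terms
```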
Fourier Series/Transforms
Chapter 12 of Numerical Recipes
Even and odd functions
Orthogonality of sines and cosines
Fourier Series; computing Fourier Series coefficients
Discussion on why Fourier Series/Transforms are so important
Fourier Convergence Theorem (to what value does Fourier Series converge?)
Periodic Functions / Fourier Convergence Theorem (to what value does
Fourier Series converge?)
Gibbs Phenomenon
Discrete-Time Fourier Transform (i.e., Fourier Transform of
uniformly-sampled function)
Cooley-Tukey Fast Fourier Transform (FFT; see "radix-2" case)
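A naive discrete Fourier transform makes the key property concrete: a pure cosine sampled at frequency bin k places all its energy in bins k and N-k, each with magnitude N/2. A minimal sketch (N = 16 and bin 3 are arbitrary illustrations; the FFT computes the same quantity in O(N log N)):

```python
import cmath
import math

# Naive DFT: X[k] = sum_n x[n] * exp(-2*pi*i*k*n/N), an O(N^2) computation.
def dft(x):
    N = len(x)
    return [sum(x[n] * cmath.exp(-2j * cmath.pi * k * n / N) for n in range(N))
            for k in range(N)]

N = 16
x = [math.cos(2 * math.pi * 3 * n / N) for n in range(N)]  # cosine at bin 3
X = dft(x)
print([round(abs(Xk), 6) for Xk in X])  # ~8.0 at k = 3 and k = 13, ~0 elsewhere
```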
Polynomial Interpolation and Extrapolation
Chapter 3 of Numerical Recipes
Polynomial/Lagrange Interpolation/Extrapolation
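The Lagrange form writes the unique degree-(n-1) polynomial through n points as P(x) = sum_j y_j * prod_{m != j} (x - x_m)/(x_j - x_m). A minimal sketch; since three points determine a unique quadratic, interpolating samples of x^2 reproduces it exactly (up to roundoff), even when extrapolating beyond the nodes:

```python
# Lagrange interpolation through the points (x_j, y_j).
def lagrange(xs, ys, x):
    total = 0.0
    for j, (xj, yj) in enumerate(zip(xs, ys)):
        L = 1.0                       # basis polynomial L_j(x)
        for m, xm in enumerate(xs):
            if m != j:
                L *= (x - xm) / (xj - xm)
        total += yj * L
    return total

xs = [0.0, 1.0, 2.0]
ys = [xj * xj for xj in xs]           # samples of f(x) = x^2
print(lagrange(xs, ys, 1.5))          # 2.25 (interpolation)
print(lagrange(xs, ys, 3.0))          # 9.0 (extrapolation)
```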
Numerical Integration
Chapter 4 of Numerical Recipes
Trapezoidal Rule
Trapezoidal Rule Example
Simpson's Rule, with derivation
Simpson's Rule Examples
Richardson Extrapolation
Romberg Integration
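The pieces above fit together: the composite trapezoidal rule has O(h^2) error, so one Richardson extrapolation step, (4 T(h/2) - T(h))/3, cancels the leading error term; iterating this idea is Romberg integration. A minimal sketch on the test integral of x^2 over [0, 1] (exact value 1/3):

```python
# Composite trapezoidal rule with n subintervals.
def trapezoid(f, a, b, n):
    h = (b - a) / n
    return h * (0.5 * f(a) + 0.5 * f(b) + sum(f(a + i * h) for i in range(1, n)))

# Composite Simpson's rule (n must be even).
def simpson(f, a, b, n):
    h = (b - a) / n
    s = f(a) + f(b)
    s += 4 * sum(f(a + i * h) for i in range(1, n, 2))
    s += 2 * sum(f(a + i * h) for i in range(2, n, 2))
    return s * h / 3.0

f = lambda x: x * x
t4 = trapezoid(f, 0.0, 1.0, 4)
t8 = trapezoid(f, 0.0, 1.0, 8)
# Richardson extrapolation of the O(h^2) trapezoidal rule.
richardson = (4 * t8 - t4) / 3.0
print(t4, t8, richardson, simpson(f, 0.0, 1.0, 4))
```

For this quadratic integrand the extrapolated value and Simpson's rule are both exact to roundoff, while the raw trapezoidal values are not.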
Root-Finding Algorithms
Chapter 9 of Numerical Recipes
Bisection Method: Pseudocode, Worked Example, Convergence Analysis
Secant Method: Basic Method, Derivation, Convergence, Comparison with
False Position Method
False Position Method: Basic Algorithm, Convergence Analysis, Extensions
(Illinois algorithm), Pseudocode
Newton-Raphson Method, with practical considerations
Worked Example of Newton-Raphson Method
Fixed-Point Iteration Root-Finding Method
Fixed-Point Iteration Examples
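The contrast between linear and quadratic convergence is easy to see on a single root. A minimal sketch finding sqrt(2) as the root of f(x) = x^2 - 2, first by bisection (one bit of accuracy per step) and then by Newton-Raphson (roughly doubling the correct digits each step):

```python
import math

f = lambda x: x * x - 2.0

# Bisection on the bracket [1, 2]: halve the bracket, keeping the half
# where f changes sign. 60 halvings shrink the bracket below roundoff.
a, b = 1.0, 2.0
for _ in range(60):
    m = 0.5 * (a + b)
    if f(a) * f(m) <= 0:
        b = m
    else:
        a = m
bisect_root = 0.5 * (a + b)

# Newton-Raphson: x <- x - f(x)/f'(x), with f'(x) = 2x here.
# Quadratic convergence: ~6 iterations from x = 1.5 reach machine precision.
x = 1.5
for _ in range(6):
    x = x - f(x) / (2.0 * x)

print(bisect_root, x, math.sqrt(2.0))
```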
Optimization: Finding Minima and Maxima of Functions
Chapter 10 of Numerical Recipes
Golden Section Search: Motivations, Derivation
Golden Section Search: Derivation, Pseudocode
Golden Section Search: Alternative Derivation
Brent's Method for Minimization: Basic Description
Nelder-Mead Method: Description, Animations
Powell's Method: Description, Examples, Mathematica code
Evolutionary Algorithms: Basic Description, Many Links to Explore
Linear Programming: Very Basic Introduction with Examples
Linear Programming: Proper Introduction
Method of Steepest/Gradient Descent
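A minimal sketch of golden section search on a unimodal function: each step shrinks the bracket by the factor 1/phi ≈ 0.618. (For clarity this sketch re-evaluates both interior points each step; a production version reuses one of the two evaluations, which is the point of choosing the golden ratio. The test function (x - 2)^2 is an arbitrary illustration.)

```python
import math

# Golden section search for the minimum of a unimodal f on [a, b].
def golden_section(f, a, b, tol=1e-8):
    invphi = (math.sqrt(5.0) - 1.0) / 2.0   # 1/phi ~ 0.6180339887
    c = b - invphi * (b - a)                # left interior point
    d = a + invphi * (b - a)                # right interior point
    while b - a > tol:
        if f(c) < f(d):
            b, d = d, c                     # minimum lies in [a, d]
            c = b - invphi * (b - a)
        else:
            a, c = c, d                     # minimum lies in [c, b]
            d = a + invphi * (b - a)
    return 0.5 * (a + b)

xmin = golden_section(lambda x: (x - 2.0) ** 2, 0.0, 5.0)
print(xmin)  # minimum is at x = 2
```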
Numerically Solving Ordinary Differential Equations (ODEs)
Chapter 16 of Numerical Recipes
Euler's Method, with Examples
Derivation of RK2 (Runge-Kutta, Second Order)
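The accuracy gap between Euler's Method (global error O(h)) and RK2 (global error O(h^2)) shows up immediately on the standard test problem dy/dt = -y, y(0) = 1, whose exact solution at t = 1 is e^{-1}. A minimal sketch using the midpoint variant of RK2:

```python
import math

def rhs(t, y):          # right-hand side of dy/dt = -y
    return -y

h, n = 0.01, 100        # step size and number of steps covering [0, 1]

# Forward Euler: y_{k+1} = y_k + h * f(t_k, y_k).
y = 1.0
for k in range(n):
    y += h * rhs(k * h, y)
euler = y

# RK2 (midpoint method): evaluate the slope at the half step.
y = 1.0
for k in range(n):
    t = k * h
    k1 = rhs(t, y)
    k2 = rhs(t + 0.5 * h, y + 0.5 * h * k1)
    y += h * k2
rk2 = y

exact = math.exp(-1.0)
print(euler - exact, rk2 - exact)  # Euler error ~2e-3; RK2 error ~6e-6
```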