Overview Tables
Tables on Automatic Differentiation
- Overview of Explicit Automatic Differentiation Rules for both scalar and tensor operations (forward- and reverse-mode)
- Overview of Implicit Automatic Differentiation Rules like root-finding, (non-)linear system solving, ODE and PDE integration, or differentiable optimization (forward- and reverse-mode); a minimal sketch of both an explicit and an implicit rule follows after this list
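A minimal sketch of what one such explicit rule and one such implicit rule look like in practice, assuming plain NumPy and a finite-difference check (the matrix-vector product and the cubic residual are illustrative examples, not taken from the tables):

```python
import numpy as np

rng = np.random.default_rng(0)

# Explicit rule: y = A @ x (matrix-vector product)
A = rng.normal(size=(3, 4))
x = rng.normal(size=4)

# Forward mode (JVP): an input perturbation dx maps to dy = A @ dx
dx = rng.normal(size=4)
dy = A @ dx

# Reverse mode (VJP): an output cotangent dL/dy maps to
# dL/dx = A.T @ dL/dy and dL/dA = outer(dL/dy, x)
g_y = rng.normal(size=3)
g_x = A.T @ g_y
g_A = np.outer(g_y, x)

# Implicit rule: z(theta) defined as the root of f(z, theta) = z^3 + theta*z - 1
# Implicit function theorem: dz/dtheta = -(df/dz)^(-1) * df/dtheta
def f(z, theta):
    return z**3 + theta * z - 1.0

def solve(theta, iters=50):
    z = 1.0
    for _ in range(iters):                  # Newton's method on the residual
        z -= f(z, theta) / (3 * z**2 + theta)
    return z

theta = 0.7
z = solve(theta)
dz_dtheta = -z / (3 * z**2 + theta)         # df/dtheta = z, df/dz = 3z^2 + theta

eps = 1e-6                                  # finite-difference check
dz_dtheta_fd = (solve(theta + eps) - solve(theta - eps)) / (2 * eps)
print(dz_dtheta, dz_dtheta_fd)              # the two values should agree closely
```

Note that the implicit rule never differentiates through the Newton iterations themselves, which is precisely what makes such rules attractive for the solvers listed above.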
The tables below are experimental and not yet complete.
- Overview of 1D Convolution Automatic Differentiation Rules with different padding and stride options (reverse-mode only; the forward mode follows directly from the matrix-vector multiplication rule); see the sketch after this list
- Overview of Convolution Automatic Differentiation Rules in various deep learning frameworks (reverse-mode only; the forward mode follows directly from the matrix-vector multiplication rule): this differs from the table above in that:
- some frameworks (like PyTorch, TensorFlow or JAX) use cross-correlation instead of convolution
- real-world convolutions/cross-correlations often map multiple channels to multiple channels and do so for multiple samples (one batch) at once, which requires additional reordering of tensor axes in the reverse passes
- the tensor layout convention (of batch, spatial and channel axes) can vary between frameworks
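A minimal sketch of the 1D rules, assuming a "valid" cross-correlation with stride 1 and no padding (the deep-learning-framework convention mentioned above); the signal, kernel, and cotangent sizes are arbitrary, and both reverse-mode rules are checked against finite differences:

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=8)                     # input signal
w = rng.normal(size=3)                     # filter kernel
g = rng.normal(size=len(x) - len(w) + 1)   # incoming cotangent dL/dy

# Forward pass: "valid" cross-correlation, y[i] = sum_k x[i + k] * w[k]
def forward(x, w):
    return np.correlate(x, w, mode="valid")

# Reverse pass, input gradient: "full" convolution of the cotangent with the
# kernel (equivalently, "full" cross-correlation with the flipped kernel)
grad_x = np.convolve(g, w, mode="full")

# Reverse pass, kernel gradient: "valid" cross-correlation of the input with
# the cotangent
grad_w = np.correlate(x, g, mode="valid")

# Finite-difference check of both rules on the scalar loss L = g . y
def loss(x, w):
    return np.dot(g, forward(x, w))

eps = 1e-6
fd_x = np.array([(loss(x + eps * np.eye(len(x))[i], w)
                  - loss(x - eps * np.eye(len(x))[i], w)) / (2 * eps)
                 for i in range(len(x))])
fd_w = np.array([(loss(x, w + eps * np.eye(len(w))[i])
                  - loss(x, w - eps * np.eye(len(w))[i])) / (2 * eps)
                 for i in range(len(w))])
print(np.allclose(grad_x, fd_x), np.allclose(grad_w, fd_w))   # True True
```

With padding, strides, multiple channels, and a batch axis, the same structure carries over, but the slicing and axis reordering become more involved; that is exactly what the tables enumerate.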
Tables on Physics-based deep learning
- Overview of learning setups for neural predictors, such as the classical t-step supervised setup, but also setups that involve differentiable physics simulators
- Overview of corrector configurations for neural correctors that are trained to augment coarse solvers; a rough structural sketch follows after this list
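A rough structural sketch of such a corrector setup under strongly simplifying assumptions: a toy periodic 1D diffusion stencil stands in for both the coarse and the reference solver, and a single matrix stands in for the correction network. None of these choices come from the table itself; they only illustrate the unrolled "solver-in-the-loop" loss:

```python
import numpy as np

rng = np.random.default_rng(0)
N = 16                                      # grid points of the toy 1D state

def coarse_solver_step(u):
    # Toy "coarse solver": explicit diffusion step with a too-large diffusivity
    # (stands in for an under-resolved simulator)
    return u + 0.4 * (np.roll(u, 1) - 2 * u + np.roll(u, -1))

def reference_step(u):
    # Toy "fine solver": same scheme with the physically correct diffusivity
    return u + 0.1 * (np.roll(u, 1) - 2 * u + np.roll(u, -1))

# Hypothetical correction "network": a single linear layer acting on the state
theta = 0.01 * rng.normal(size=(N, N))

def corrector(u, theta):
    return u + theta @ u                    # network predicts an additive correction

def rollout_loss(theta, u0, n_steps=5):
    # Unrolled rollout: coarse step followed by a learned correction,
    # compared against the reference trajectory at every step
    u_coarse, u_ref, loss = u0.copy(), u0.copy(), 0.0
    for _ in range(n_steps):
        u_coarse = corrector(coarse_solver_step(u_coarse), theta)
        u_ref = reference_step(u_ref)
        loss += np.mean((u_coarse - u_ref) ** 2)
    return loss / n_steps

u0 = np.sin(2 * np.pi * np.arange(N) / N)
print(rollout_loss(theta, u0))
# In an actual training setup, the gradient of this loss w.r.t. theta would be
# obtained by reverse-mode AD through both the network and the differentiable
# coarse solver, e.g. with PyTorch or JAX.
```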
Tables on Fast Fourier Transform (FFT)
- Relation between functions and their Fourier coefficients in 1d: the table shows, for instance,
- how the coefficients are scaled
- that a real positive cosine has positive real coefficients at both the positive and the negative wavenumber
- that a real positive sine has a negative imaginary coefficient at the positive wavenumber and a positive imaginary coefficient at the negative wavenumber
- how imaginary sines and cosines are represented
- how the Nyquist mode glitches at an even number of sampling points
- how higher modes are aliased
- A similar table for 2d, which additionally shows:
- how the indexing scheme for np.meshgrid affects the order of the coefficients (see also the sketch after this list)
- how the scaling of the coefficients changes (N scaling for the zero mode, N/2 scaling for modes where one axis is the zero mode, N/4 scaling for all other modes, except the Nyquist mode at even numbers of sampling points)
- how the Nyquist mode glitches, especially if one axis has an even number of sampling points and the other an odd number
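A minimal sketch of how several of these conventions can be checked numerically with np.fft; the wavenumbers and the grid size of 16 points per axis are arbitrary, and NumPy's unnormalized forward transform is assumed:

```python
import numpy as np

N = 16                              # points per axis (even, so a Nyquist mode exists)
x = 2 * np.pi * np.arange(N) / N

# 1d: a real positive cosine of wavenumber k gives a real coefficient of N/2
# at +k and at -k; a real positive sine gives -i*N/2 at +k and +i*N/2 at -k
k = 3
print(np.round(np.fft.fft(np.cos(k * x)), 8)[[k, -k]])   # both approx. 8 = N/2
print(np.round(np.fft.fft(np.sin(k * x)), 8)[[k, -k]])   # approx. -8j and +8j

# 1d: the Nyquist cosine at even N collapses into a single coefficient of size N
# (instead of two coefficients of size N/2)
print(np.round(np.fft.fft(np.cos((N // 2) * x)), 8)[N // 2])   # approx. 16 = N

# 2d: with indexing="ij" the first meshgrid axis is the first axis of fft2,
# so u_hat[k0, k1] carries the wavenumber pair (k0, k1) in that order
X0, X1 = np.meshgrid(x, x, indexing="ij")
k0, k1 = 2, 5
u_hat = np.round(np.fft.fft2(np.cos(k0 * X0) * np.cos(k1 * X1)), 8)
# all four sign combinations of (+-k0, +-k1) carry 16*16/4 = 64
print(u_hat[k0, k1], u_hat[-k0, -k1], u_hat[k0, -k1], u_hat[-k0, k1])
# a mode that is the zero mode along one axis scales with 16*16/2 = 128 instead
print(np.round(np.fft.fft2(np.cos(k1 * X1)), 8)[0, k1])
```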