The Good Stuff (17/8/20 - 23/8/20)

If anybody comes across an error, please feel free to correct me!

Maths

  • Out of all the subjects, I think Brilliant excels most at Number Theory (although Stanford’s Cryptography website comes close). I tried my hand at some problems using Euler’s totient theorem, the rather obvious rational root theorem, and the Extended Euclidean Algorithm (super important, with Bézout’s identity, for RMO) together with the Chinese Remainder Theorem (which, on the surface, looks extremely fancy, but is really just a uniqueness theorem) - there’s a quick CRT sketch after this list.
  • Möbius strips are the clearest examples of line bundles - but what purpose do they serve?
  • The Möbius inversion formula (if $g(n) = \sum_{d \mid n} f(d)$, then $f(n) = \sum_{d \mid n} \mu(d)\, g(n/d)$) looks promising, but I’ve yet to find some good problems utilising it.
  • I read a lovely high-level introduction to monstrous moonshine and the sporadic groups.
  • I also found a great Maths SE answer on when to use Cholesky, LU(P), QR and SVD (in short, there’s a tradeoff between numerical stability and speed; also, QR is generally used for orthogonalisation)
  • Sylow’s theorems kinda remind me of the twin prime conjecture - I don’t know why.
  • Laplace transforms are just continuous analogues of power series, if you think about it - I’ve spelt the analogy out after this list.
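
Here’s a minimal sketch of the CRT mechanics mentioned above: the solution is assembled from Bézout coefficients produced by the extended Euclidean algorithm (the function names and structure are my own):

```python
def extended_gcd(a, b):
    """Return (g, x, y) with a*x + b*y == g == gcd(a, b)."""
    if b == 0:
        return a, 1, 0
    g, x, y = extended_gcd(b, a % b)
    return g, y, x - (a // b) * y

def crt(residues, moduli):
    """Solve x = r_i (mod m_i) for pairwise-coprime moduli m_i."""
    x, m = 0, 1
    for r, n in zip(residues, moduli):
        g, p, _ = extended_gcd(m, n)
        assert g == 1, "moduli must be pairwise coprime"
        # Bezout: p*m + q*n = 1, so the update is r (mod n) and x (mod m)
        x = (x + (r - x) * p * m) % (m * n)
        m *= n
    return x

print(crt([2, 3, 2], [3, 5, 7]))  # 23, the classic Sunzi problem
```

Uniqueness mod $\prod m_i$ is exactly the “uniqueness theorem” reading: any two solutions differ by a multiple of every modulus.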
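
And the Laplace analogy spelt out: a power series sums $a_n x^n$ over a discrete index, while the Laplace transform integrates $f(t)\,x^t$ over a continuous one, with $x = e^{-s}$:

$$\sum_{n=0}^{\infty} a_n x^n \;\longrightarrow\; \int_0^{\infty} f(t)\,x^t\,\mathrm{d}t = \int_0^{\infty} f(t)\,e^{-st}\,\mathrm{d}t = \mathcal{L}\{f\}(s).$$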

Linear Algebra

  • Silly me! I thought the only way to diagonalise a matrix was by pre- and post-multiplying with the eigenvector matrix, $D = P^{-1}AP$ - instead, just put the eigenvalues on the diagonal! (in a computational context where I’d already computed the eigenvalues)
  • Now I see why Gaussian elimination is a killer - for triangular matrices, the determinant (the product of the diagonal) and the eigenvalues (the diagonal entries themselves) are equally simple to read off.
  • I’ll need a better source than Wikipedia on the Jacobi eigenvalue algorithm, since the article there is incorrect.
  • The spectral theorem was a good induction exercise.
  • Obviously separable matrices (outer products $uv^\top$) have rank 1 (I hate that terminology)
  • OK, the Jacobi eigenvalue algorithm involves using Givens rotations to zero out off-diagonal entries - fairly straightforward (see the sketch after this list)
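
A rough sketch of the Jacobi iteration as I understand it: pick the largest off-diagonal entry, kill it with a Givens rotation, and repeat until the matrix is numerically diagonal. The implementation details below are my own, not from the (incorrect) Wikipedia article:

```python
import numpy as np

def jacobi_eig(A, tol=1e-12, max_iter=1000):
    """Eigen-decomposition of a real symmetric matrix by Jacobi rotations."""
    A = A.astype(float)
    n = A.shape[0]
    V = np.eye(n)
    for _ in range(max_iter):
        off = np.abs(A - np.diag(np.diag(A)))
        p, q = np.unravel_index(np.argmax(off), off.shape)
        if off[p, q] < tol:
            break
        # Givens angle chosen to zero out A[p, q]
        theta = 0.5 * np.arctan2(2 * A[p, q], A[q, q] - A[p, p])
        c, s = np.cos(theta), np.sin(theta)
        J = np.eye(n)
        J[p, p] = J[q, q] = c
        J[p, q], J[q, p] = s, -s
        A = J.T @ A @ J   # similarity transform: eigenvalues unchanged
        V = V @ J
    return np.diag(A), V

A = np.array([[4., 1., 2.], [1., 3., 0.], [2., 0., 1.]])
evals, _ = jacobi_eig(A)
print(np.allclose(np.sort(evals), np.linalg.eigvalsh(A)))  # True
```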

Physics

  • I completely understand the Hellmann-Feynman theorem now (although the expectation values of $1/r^n$ are too boring).
  • I have to look into ‘instantaneous multipoles’, in the context of dispersion forces.
  • minutephysics’ video on time dilation and length contraction is arguably the clearest there is
  • I solved for the equations of motion for a projectile with air resistance (fairly simple - but always use Newtonian formalism!). There’s a numerical sketch after this list.
  • Wave function collapse is also clear now (Griffiths’s book is quite poor in this regard) - what is decoherence?
  • The way Goldstone bosons arise from spontaneous symmetry breaking is so cool (in $\phi^4$ theory and harmonic oscillators, the constant is written as a square precisely so that it’s manifestly positive!)
  • Now it’s fairly clear why gauge bosons ‘transform’ under the adjoint representation - that’s how the gauge field changes!
  • I’d forgotten something basic about electric fields - so I decided to go through Griffiths Chapters 1-3 again (as Andrew Dotson said, it deserves to be a national treasure. It’s a shame that the same cannot be said for his QM book).
  • Scattering looks a bit more interesting now because of HEP in P&S - I’ll go through Rutherford scattering first in Griffiths.
  • Doppler broadening of spectral lines was interesting (and something which I’d never thought of before) - the standard line width is written out after this list.
  • I’d love to see spherical harmonics paired with FFT in some computational problem; a link between geometric topology and Feynman diagram symmetry would be awesome too
  • Gamma matrices won’t form a basis for $\mathbb{C}^{4\times4}$ (just do some dimensional analysis: the space is 16-dimensional and there are only four $\gamma^\mu$ - it’s the 16 products of gamma matrices that do form a basis)
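
A quick numerical sketch of the projectile problem, assuming linear drag $-b\mathbf{v}$ (the analytically solvable case); the constants here are invented:

```python
import numpy as np
from scipy.integrate import solve_ivp

g, b, m = 9.81, 0.1, 1.0          # invented constants, SI units

def rhs(t, y):
    # y = [x, z, vx, vz]; Newton: m dv/dt = -m g e_z - b v
    x, z, vx, vz = y
    return [vx, vz, -b * vx / m, -g - b * vz / m]

v0, angle = 20.0, np.radians(45)
y0 = [0.0, 0.0, v0 * np.cos(angle), v0 * np.sin(angle)]
sol = solve_ivp(rhs, (0.0, 3.0), y0, dense_output=True, rtol=1e-9)

# Check against the closed form vx(t) = vx0 * exp(-b t / m)
t = 1.0
print(sol.sol(t)[2], y0[2] * np.exp(-b * t / m))
```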
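
For the record, the standard Doppler-broadening result: the Maxwell-Boltzmann distribution of line-of-sight velocities turns a sharp line at $\nu_0$ into a Gaussian with

$$\Delta\nu_{\text{FWHM}} = \nu_0 \sqrt{\frac{8 k_B T \ln 2}{m c^2}},$$

so the broadening grows with temperature and shrinks with emitter mass.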

Chemistry

  • It’s been confirmed - electrons do occupy a superposition of all orbitals (I knew this, but had a niggling doubt regarding the formalism). The shielding effect is interesting - I wonder why there isn’t more literature on it?
  • Maybe I should research a bit about ligand field theory (it doesn’t feel like a substantial improvement over crystal field theory, though).
  • My hypothesis is that $nd^5$ and $nd^{10}$ are more stable because of Hund’s first and third rules respectively. I found a great explanation on why $\text{S=S}$ is stronger than $\text{O=O}$, and why antibonding orbitals are more antibonding than bonding orbitals are bonding.
  • I tried learning about DFT, but I decided that a solid grounding in practical Hartree-Fock SCF is in order first, so I discovered a good PPT by emich.edu and ESQC. IvoFilot.nl had an awesome practical post on SCF linear algebra methods! (there’s a schematic SCF loop after this list)
  • I’m wondering how I can separate the overlap and four-index two-electron matrices (in the spirit of a Gaussian filter, but obviously not the same method)
  • $d_{z^2}$ is actually an abbreviation for $d_{3z^2 - r^2}$ (sneaky chemists) - that’s why you only need 5 d-orbitals
  • I made a basic CGTO basis set program, with assistance from my second favourite book of the month, Szabo and Ostlund. Everything about Hartree-Fock is explained in detail - they don’t skim over anything, they have appendices on the computational aspects, they have EXAMPLES, even a friendly tone - and, of course, it was under my nose the whole time I was trying to find a decent source on Hartree-Fock (I had downloaded it last year).
  • I repeated my least-squares basis-function fit for GTOs - turns out it’s an optimisation problem called ‘basis pursuit’ (a crude version is sketched after this list). So, ab initio, I was able to compute the ground state energy for $\text{HeH}^+$ ($\text{H}_2^+$ can be deduced by symmetry arguments - it didn’t deserve a full-blown matrix solve)
  • I wasn’t aware of ‘diffuse functions’, with small $\zeta$: I’ll probably never get round to using them, unfortunately
  • I managed to understand the theory behind DFT - perhaps a good basis set is all you need, given the equations. The ABC of DFT is a decent introduction, but it is surprisingly difficult to find practical DFT code the same way you can for vanilla HF. I think compphys.go.ro’s GitHub page might supplement his already excellent articles with some concrete code.
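
A schematic of the Roothaan SCF loop (the one Szabo and Ostlund spell out), assuming the integral arrays S, Hcore and eri have already been computed - e.g. by my CGTO program:

```python
import numpy as np

def scf(S, Hcore, eri, n_occ, max_iter=50, tol=1e-8):
    """Schematic closed-shell Hartree-Fock SCF.
    S: overlap; Hcore: kinetic + nuclear attraction;
    eri[p, q, r, s]: chemists' notation (pq|rs); n_occ: occupied orbitals."""
    # Symmetric (Lowdin) orthogonalisation: X = S^(-1/2)
    val, vec = np.linalg.eigh(S)
    X = vec @ np.diag(val ** -0.5) @ vec.T
    D, E_old = np.zeros_like(S), 0.0
    for _ in range(max_iter):
        # Fock matrix: core + Coulomb - half exchange
        J = np.einsum('pqrs,rs->pq', eri, D)
        K = np.einsum('prqs,rs->pq', eri, D)
        F = Hcore + J - 0.5 * K
        # Roothaan equations FC = SCe, solved in the orthogonal basis
        eps, Cp = np.linalg.eigh(X.T @ F @ X)
        C = X @ Cp
        D = 2.0 * C[:, :n_occ] @ C[:, :n_occ].T       # closed-shell density
        E = 0.5 * np.einsum('pq,pq->', D, Hcore + F)  # electronic energy
        if abs(E - E_old) < tol:
            break
        E_old = E
    return E, C, eps
```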
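
And a crude version of the least-squares GTO fit: a plain grid fit of three primitives to a $\zeta = 1$ Slater 1s function, which is the spirit of STO-3G rather than true basis pursuit (real STO-nG fits use normalised primitives and a different weighting):

```python
import numpy as np
from scipy.optimize import curve_fit

r = np.linspace(1e-3, 6.0, 400)
sto = np.exp(-r)                       # unnormalised 1s Slater function

def gto_sum(r, c1, c2, c3, a1, a2, a3):
    """Sum of three primitive s-type Gaussians."""
    return (c1 * np.exp(-a1 * r**2) + c2 * np.exp(-a2 * r**2)
            + c3 * np.exp(-a3 * r**2))

p0 = [0.4, 0.4, 0.2, 0.1, 0.4, 2.0]    # rough initial guess
popt, _ = curve_fit(gto_sum, r, sto, p0=p0, maxfev=20000)
print(np.max(np.abs(gto_sum(r, *popt) - sto)))   # worst-case residual
```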

Computer Science

  • Butcher tableaux for RK methods are a neat representation (a generic stepper is sketched after this list) - is there a way to find whether a differential equation is stiff other than trying explicit methods on it?
  • Elementary (one-dimensional) cellular automata are cool (Rule 30 is on Cambridge North station!), so I’ll try making a parallel implementation of all of them - a vectorised sketch follows this list.
  • What are the advantages of quad vs trapz vs simpson? I was thinking of numerical integration for my HFSCF program, but I’m leaning towards analytical solutions for GTOs (same problem with the kinetic operator). I might use this as an opportunity to use Julia for a practical project because of the way it handles special matrices, although I’m not crazy about the VSCode support.
  • All right, using discretisation, I managed to solve for the hydrogen energies to about 99.8% accuracy (it’s not as satisfying as HF, nor does it generalise well, but it’s good practice for finite element methods) - a minimal version is after this list.
  • Do neural ODE solvers have an edge in any way here?
  • Also, I guess I’ll be using Householder reflections for diagonalisation, unless Julia also handles that.
  • I read a bit about Gaussian integral evaluation techniques - I should look into FFT as a general method for such cases.
  • I’d really like to find out more about grid-based methods for ODE-based problems - hopefully with some FFTs into reciprocal space as well.
  • I found a guide called ‘FORM for pedestrians’ - a comprehensive intro, even containing Dirac-algebra trace formalism! (I should try and clone it as my first true CMD program.) It’ll definitely be an excellent companion for any HEP course.
  • I also need to find out more about graph embeddings
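
A generic explicit stepper driven directly by a Butcher tableau, to show how neat the representation is (the tableau below is the classic RK4 one):

```python
import math

def rk_step(f, t, y, h, A, b, c):
    """One explicit Runge-Kutta step defined by a Butcher tableau (A, b, c)."""
    k = []
    for i in range(len(c)):
        yi = y + h * sum(A[i][j] * k[j] for j in range(i))
        k.append(f(t + c[i] * h, yi))
    return y + h * sum(bi * ki for bi, ki in zip(b, k))

# Classic RK4 tableau
A = [[0, 0, 0, 0], [0.5, 0, 0, 0], [0, 0.5, 0, 0], [0, 0, 1, 0]]
b = [1/6, 1/3, 1/3, 1/6]
c = [0, 0.5, 0.5, 1]

# y' = -y, y(0) = 1: one step of h = 0.1 vs the exact exp(-0.1)
print(rk_step(lambda t, y: -y, 0.0, 1.0, 0.1, A, b, c), math.exp(-0.1))
```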
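
A vectorised (numpy-“parallel”) sketch of the elementary automata; swapping the rule number gives all 256 of them:

```python
import numpy as np

def eca(rule, width=81, steps=30):
    """Elementary (1-D, 2-state, radius-1) cellular automaton, Wolfram coding."""
    table = [(rule >> i) & 1 for i in range(8)]   # output for neighbourhood i
    row = np.zeros(width, dtype=int)
    row[width // 2] = 1                           # single seed cell
    history = [row]
    for _ in range(steps):
        left, right = np.roll(row, 1), np.roll(row, -1)
        row = np.take(table, 4 * left + 2 * row + right)
        history.append(row)
    return np.array(history)

for line in eca(30, steps=15):                    # Rule 30
    print(''.join('#' if cell else ' ' for cell in line))
```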
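
The hydrogen discretisation itself is just a tridiagonal eigenproblem; here’s a minimal version for the s-states (atomic units; the grid parameters are my own choices):

```python
import numpy as np

# Radial equation for l = 0: -u''/2 - u/r = E u, with u(0) = u(r_max) = 0
N, r_max = 1500, 60.0
h = r_max / (N + 1)
r = h * np.arange(1, N + 1)

# Central-difference Laplacian gives a tridiagonal Hamiltonian
H = (np.diag(1.0 / h**2 - 1.0 / r)
     + np.diag(-0.5 / h**2 * np.ones(N - 1), 1)
     + np.diag(-0.5 / h**2 * np.ones(N - 1), -1))

print(np.linalg.eigvalsh(H)[:3])   # approaches -1/2, -1/8, -1/18 hartree
```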

Machine Learning

  • I’m not sure how importance sampling is faster; I’ll have to consult a better source. I read a bit about CMA-ES - hopefully that helps with the rest of the EAs (evolutionary algorithms)
  • Lilian Weng has one of the clearest entries on VAEs of all shapes and sizes - I learnt about the VQ-VAE this time.
  • Back to Brian Keng - the Expectation-Maximization Algorithm, Regularization and Linear Regression Probabilistically, Marginal Likelihood in VAEs, Autoregressive Autoencoders (see: MADE) - even Hypothesis Testing, where he made a crucial distinction between ‘trapping’ with probability, and probabilistically trapping.
  • I saw why the denominator in the usual Bayesian formula is called the ‘evidence’ - I suppose it sort of makes sense, although it feels like abuse of terminology (written out after this list)
  • There was a word2vec paper (the original?) that I never got round to reading :(
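
Writing the ‘evidence’ out, for my own sake: the denominator is the likelihood of the data with the parameters integrated out, i.e. the marginal likelihood - the same object as in the VAE post above:

$$p(\theta \mid x) = \frac{p(x \mid \theta)\,p(\theta)}{p(x)}, \qquad p(x) = \int p(x \mid \theta)\,p(\theta)\,\mathrm{d}\theta.$$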