Yet another introduction to natural language processing

Lecture notes

  1. Introduction to natural language processing
  2. Text normalization, units and edit distance
  3. Language modelling with N-grams
  4. Entropy and perplexity (advanced)
  5. Hidden Markov models
  6. Expectation maximization (advanced)
  7. Word embeddings
  8. An introduction to neural networks
  9. Recurrent neural networks
  10. Encoder-decoder models and attention


With permission, these notes draw on content from the NLP courses taught by Jan Buys (University of Cape Town) and Sharon Goldwater (University of Edinburgh).


Herman Kamper, 2022–2023.
This work is released under a Creative Commons Attribution-ShareAlike license (CC BY-SA 4.0).