Hopefully others will also find these notes useful. Let me know if you spot any mistakes or have comments.
- Yet another introduction to backpropagation. [pdf]
- Gibbs sampling for fitting finite and infinite Gaussian mixture models. [pdf, code]
- Vector and matrix calculus. [pdf]
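The Gibbs-sampling note above covers finite and infinite Gaussian mixtures. As a rough self-contained illustration of the finite case (not the linked code, and simplified to known component variance `sigma2`, uniform mixing weights, and a `N(0, tau2)` prior on the means), the sampler alternates between two conditional updates: resample each assignment given the current means, then resample each mean given its assigned points.

```python
import numpy as np

def gibbs_gmm(x, K, n_iter=100, sigma2=1.0, tau2=10.0, seed=0):
    """Toy Gibbs sampler for a 1-D finite Gaussian mixture.

    Assumes known component variance sigma2, uniform mixing weights,
    and an N(0, tau2) prior on each component mean. This is a sketch
    for illustration, not the sampler from the linked note/code.
    """
    rng = np.random.default_rng(seed)
    N = len(x)
    mu = rng.normal(0.0, np.sqrt(tau2), size=K)  # init means from the prior
    z = rng.integers(K, size=N)                  # random initial assignments

    for _ in range(n_iter):
        # Resample assignments: p(z_i = k) ∝ N(x_i | mu_k, sigma2).
        logp = -0.5 * (x[:, None] - mu[None, :]) ** 2 / sigma2
        p = np.exp(logp - logp.max(axis=1, keepdims=True))
        p /= p.sum(axis=1, keepdims=True)
        z = np.array([rng.choice(K, p=p_i) for p_i in p])

        # Resample means: Gaussian conditional given the assigned points.
        for k in range(K):
            xk = x[z == k]
            prec = 1.0 / tau2 + len(xk) / sigma2   # posterior precision
            mean = (xk.sum() / sigma2) / prec      # posterior mean
            mu[k] = rng.normal(mean, np.sqrt(1.0 / prec))

    return mu, z
```

On well-separated data the means settle near the true cluster centres within a few dozen sweeps; the infinite (Dirichlet-process) version replaces the fixed `K` with an assignment prior that can open new components.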
- Learning from unlabelled speech, with and without visual cues. Ohio State University, 2017. [slides]
- Learning from unlabelled speech, with and without visual cues. University of Maryland, CLIP Colloquium Speaker, 2017.
- Unsupervised neural and Bayesian models for zero-resource speech processing. MIT, Computer Science and Artificial Intelligence Laboratory, 2016. [slides]
- Unsupervised speech processing using acoustic word embeddings. Workshop on Machine Learning in Speech and Language Processing, Spotlight Speaker, 2016. [slides]
- recipe_vision_speech_flickr: A complete recipe for our visually grounded keyword prediction model described in Interspeech’17.
- segmentalist: Unsupervised word segmentation and clustering of speech in Python. We used it in CSL’17 and applied it to the Zero Resource Speech Challenge 2015 data (English and Xitsonga), as shown in this complete recipe.
- couscous: Theano code for training Siamese CNNs. We used it in ICASSP’16 for training acoustic word embeddings from speech, as shown in this complete recipe.
- speech_correspondence: Pylearn2 implementation of the correspondence autoencoder, as described in ICASSP’15.
- bayes_gmm: Bayesian Gaussian mixture models in Python, as described in SLT’14.
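The Siamese training in couscous pulls embeddings of same-word pairs together and pushes different-word pairs apart. As a minimal sketch of that objective only (plain NumPy, not the repo's Theano API; the margin value is an illustrative choice), a contrastive loss over a pair of embeddings looks like:

```python
import numpy as np

def cos_distance(a, b):
    """Cosine distance between two embedding vectors (0 = identical direction)."""
    return 1.0 - np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))

def contrastive_loss(a, b, same, margin=0.5):
    """Hinge-style pair loss: minimise distance for same-word pairs,
    and penalise different-word pairs closer than `margin`."""
    d = cos_distance(a, b)
    return d if same else max(0.0, margin - d)
```

Summing this loss over sampled same/different pairs and backpropagating through the embedding network is the basic Siamese training loop; identical same-word pairs and sufficiently distant different-word pairs both contribute zero loss.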