NLP course at ITMO University (Spring 2019)

Slides

  1. Introduction + NLP for information retrieval
  2. Text representations, model evaluation and text classification
  3. String distances, regular expressions
  4. Language modeling
  5. Markov chains, information theory
  6. Vector semantics
  7. Clustering
  8. Topic modeling
  9. Duplicate object detection
  10. Sequence tagging
  11. Syntax: PSG and others
  12. Syntax: dependency grammar
  13. Guest lecture by Valentin Malykh: Convolutional NNs for NLP
  14. Guest lecture by Valentin Malykh: Attention mechanism (and more) for NLP

Written exam questions

  1. Zipf's law and its importance for NLP. Language processing in information retrieval: lemmatization, stemming, Boolean search, inverted indices, execution of Boolean queries on them, skip lists.
  2. Language processing in information retrieval: vector space model, cosine distance, TF-IDF. Common ways of representing texts for machine learning tasks.
  3. String distances and the algorithms for their computation: the Hamming distance, the Jaro-Winkler distance, the Levenshtein distance, the longest common subsequence, the Jaccard distance for character N-grams. Indices for typo detection/correction in words.
  4. Edit distances (definitions only). Regular expressions: basic constructions, recommendations for use.
  5. Markov chains. Ergodic theorem. PageRank and Markov chains. Direct applications in text analysis.
  6. Elements of information theory: self-information, bit, pointwise mutual information, Kullback-Leibler divergence, Shannon entropy, its interpretations. Cross-entropy. Example of an application: collocations extraction.
  7. Language modeling. N-gram models. Perplexity. The reasons for doing smoothing. Additive (Laplace) smoothing. Interpolation and backoff. The ideas on which the Kneser-Ney smoothing is based.
  8. Vector semantics: term-document matrices, term-context matrices, HAL. SVD, LSA, NMF. Methods for quality evaluation of vector semantics models.
  9. Vector semantics: what is word2vec (the core principles of the SGNS algorithm and its relationship with matrix factorization), word2vec as a neural network. Methods for quality evaluation of vector semantics models.
  10. Clustering: types of clustering algorithms. K-means, agglomerative and divisive clustering (+ ways of estimating the distances between clusters), DBSCAN. Limitations and areas of applicability of each algorithm. Methods for clustering quality evaluation and the shortcomings of each.
  11. Duplicate search: statement of the problem, description of the MinHash algorithm. Proof that the probability of hashes matching equals the Jaccard similarity.
  12. Topic modeling. LSA, pLSA, LDA, ARTM. Advantages and disadvantages of each method. Topic modeling quality evaluation (perplexity, coherence and methods with experts involved).
  13. Classification. Binary classification quality evaluation. Metric classification methods. Logical methods of classification. Linear classification methods.
  14. Quality evaluation of machine learning models (why the data set is divided into three parts). Classification. Multi-class classification quality evaluation. Naive Bayes classifier. Ensembles of machine learning models.
  15. Sequence tagging. PoS tagging. Named entity recognition. Hidden Markov models. Estimation of the probability of a sequence of states. Estimation of the probability of a sequence of observations. Quality evaluation.
  16. Sequence tagging. PoS tagging. Named entity recognition. Hidden Markov models. Decoding of the most probable sequence of states (Viterbi algorithm without proof). Quality evaluation.
  17. Sequence tagging. PoS tagging. Named entity recognition. Structured perceptron. Structured perceptron training. Sequence tagging quality evaluation.
  18. Syntax parsing. Syntax description approaches. Phrase structure grammar: the principles. Formal grammar. Chomsky Normal Form. Cocke-Kasami-Younger algorithm, its complexity. Parsing quality evaluation.
  19. Syntax parsing. Syntax description approaches. Phrase structure grammar: the principles. Probabilistic context-free grammar. Cocke-Kasami-Younger algorithm for PCFG (without proof), its complexity. Parsing quality evaluation.
  20. Syntax parsing. Syntax description approaches. Dependency grammar, core principles. Parsing quality evaluation. Transition-based dependency parsing: how it works. The algorithm (everything but the 'oracle').
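
Code sketches

Some of the exam topics lend themselves to compact code sketches. For question 3, here is a minimal Levenshtein distance implementation (the standard two-row dynamic program with unit edit costs; an illustrative sketch, not taken from the course slides):

```python
def levenshtein(a: str, b: str) -> int:
    """Edit distance with unit-cost insertions, deletions and substitutions."""
    prev = list(range(len(b) + 1))  # distances from a[:0] to every prefix of b
    for i, ca in enumerate(a, 1):
        cur = [i]  # distance from a[:i] to the empty prefix of b
        for j, cb in enumerate(b, 1):
            cur.append(min(
                prev[j] + 1,               # delete ca
                cur[j - 1] + 1,            # insert cb
                prev[j - 1] + (ca != cb),  # substitute (free if characters match)
            ))
        prev = cur
    return prev[-1]

# levenshtein("kitten", "sitting") -> 3
```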
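For question 11, a small sketch of the MinHash estimate of Jaccard similarity; the salted-MD5 hash family and the character 3-gram shingling below are illustrative choices, not prescribed by the course:

```python
import hashlib

def make_hash(seed: int):
    """One member of a hash family: MD5 salted with the seed."""
    def h(x: str) -> int:
        return int(hashlib.md5(f"{seed}:{x}".encode()).hexdigest(), 16)
    return h

def shingles(text: str, n: int = 3) -> set:
    """Character n-gram shingles of a string."""
    return {text[i:i + n] for i in range(len(text) - n + 1)}

def minhash_signature(shingle_set, hash_funcs):
    # For each hash function, keep the minimum hash value over the set.
    return [min(h(s) for s in shingle_set) for h in hash_funcs]

def jaccard(a: set, b: set) -> float:
    return len(a & b) / len(a | b)

# The fraction of matching signature positions estimates J(A, B), because each
# min-hash of the union lands in the intersection with probability J(A, B).
hashes = [make_hash(i) for i in range(256)]
A = shingles("minhash approximates jaccard similarity")
B = shingles("minhash estimates jaccard similarity")
sig_a = minhash_signature(A, hashes)
sig_b = minhash_signature(B, hashes)
estimate = sum(x == y for x, y in zip(sig_a, sig_b)) / len(hashes)
```

With 256 hash functions the standard error of the estimate is roughly 0.03, so it tracks the exact Jaccard similarity closely on this toy pair.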
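For questions 15 and 16, a toy Viterbi decoder over a two-tag HMM; the tag set and all probabilities below are made-up illustration values, kept strictly positive so the log-space computation is safe:

```python
import math

def viterbi(obs, states, start_p, trans_p, emit_p):
    """Most probable state sequence for an HMM, computed in log space."""
    # V[t][s] = (best log-probability of a path ending in s at time t, predecessor)
    V = [{s: (math.log(start_p[s] * emit_p[s][obs[0]]), None) for s in states}]
    for t in range(1, len(obs)):
        V.append({})
        for s in states:
            score, prev = max(
                (V[t - 1][p][0] + math.log(trans_p[p][s] * emit_p[s][obs[t]]), p)
                for p in states
            )
            V[t][s] = (score, prev)
    # Backtrack from the best final state.
    best = max(states, key=lambda s: V[-1][s][0])
    path = [best]
    for t in range(len(obs) - 1, 0, -1):
        path.append(V[t][path[-1]][1])
    return path[::-1]

# Toy model: determiners are usually followed by nouns.
states = ["DET", "NOUN"]
start_p = {"DET": 0.8, "NOUN": 0.2}
trans_p = {"DET": {"DET": 0.1, "NOUN": 0.9}, "NOUN": {"DET": 0.5, "NOUN": 0.5}}
emit_p = {"DET": {"the": 0.9, "dog": 0.1}, "NOUN": {"the": 0.1, "dog": 0.9}}
# viterbi(["the", "dog"], states, start_p, trans_p, emit_p) -> ["DET", "NOUN"]
```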