NLP course at MCS Department of SPbU (2023)

NLP, SPbU, Spring 2023

Exam questions

  1. Zipf's Law, its importance for NLP. Language processing in information retrieval: lemmatization, stemming, Boolean search, inverted indices, execution of Boolean queries on them, skip-lists.
  2. Language processing in information retrieval: vector space model, cosine distance, TF-IDF. Common ways of representing texts for machine learning tasks. (A TF-IDF/cosine sketch follows this list.)
  3. Neural Networks: core principles, backpropagation, common optimizers, regularization techniques.
  4. String distances and algorithms for computing them: the Hamming distance, the Jaro-Winkler distance, the Levenshtein distance, the longest common subsequence, the Jaccard distance on character N-grams. Indices for typo detection/correction in words. (A Levenshtein sketch follows this list.)
  5. Markov chains. Ergodic theorem. PageRank and Markov chains. Direct applications in text analysis. (A PageRank power-iteration sketch follows this list.)
  6. Elements of information theory: self-information, bit, pointwise mutual information, Kullback-Leibler divergence, Shannon entropy and its interpretations. Cross-entropy. Example application: collocation extraction.
  7. Language modeling. N-gram models. Perplexity. The reasons for smoothing. Additive (Laplace) smoothing. Interpolation and backoff. The ideas behind Kneser-Ney smoothing. (A bigram-model sketch with Laplace smoothing and perplexity follows this list.)
  8. Language modeling. The Neural Probabilistic Language Model (Bengio et al., 2003). AWD-LSTM (2017). Perplexity.
  9. Vector semantics: term-document matrices, term-context matrices, HAL. SVD, LSA, NMF. Methods for quality evaluation of vector semantics models.
  10. Vector semantics: what is word2vec (the core principles of the SGNS algorithm and its relationship with matrix factorization), word2vec as a neural network. Methods for quality evaluation of vector semantics models.
  11. Clustering: types of clustering algorithms. KMeans, agglomerative and divisive clustering (+ ways of estimating distances between clusters), DBSCAN. Limitations and areas of applicability of each algorithm. Methods for clustering quality evaluation and the shortcomings of each.
  12. Duplicate search: statement of the problem, description of the MinHash algorithm. The probability of a hash match equals the Jaccard similarity (with proof). (A MinHash sketch follows this list.)
  13. Topic modeling. LSA, pLSA, LDA, ARTM. Advantages and disadvantages of each method. Topic modeling quality evaluation (perplexity, coherence, and expert-based methods).
  14. Topic modeling + neural TM. pLSA, NTM, ABAE. Advantages and disadvantages of each method. Topic modeling quality evaluation (perplexity, coherence, and expert-based methods).
  15. Sequence tagging. PoS tagging. Named entity recognition. Hidden Markov models. Estimation of the probability of a sequence of states. Estimation of the probability of a sequence of observations. Quality evaluation.
  16. Sequence tagging. PoS tagging. Named entity recognition. Hidden Markov models. Decoding the most probable sequence of states (the Viterbi algorithm, without proof). Quality evaluation. (A Viterbi sketch follows this list.)
  17. Sequence tagging. PoS tagging. Named entity recognition. Structured perceptron. Structured perceptron training. Sequence tagging quality evaluation.
  18. Neural sequence tagging. Simple RNN approach, bidirectional RNNs, biLSTM-CRF.
  19. Syntax parsing. Syntax description approaches. Phrase structure grammar: the principles. Formal grammar. Chomsky Normal Form. Cocke-Kasami-Younger (CKY) algorithm and its complexity. Parsing quality evaluation. (A CKY recognizer sketch follows this list.)
  20. Syntax parsing. Syntax description approaches. Phrase structure grammar: the principles. Probabilistic context-free grammar. Cocke-Kasami-Younger algorithm for PCFG (without proof) and its complexity. Parsing quality evaluation.
  21. Syntax parsing. Syntax description approaches. Dependency grammar, core principles. Parsing quality evaluation. Transition-based dependency parsing: how it works. The algorithm (everything but the 'oracle').
  22. The encoder-decoder approach in NLP. OOV token processing. The Transformer architecture.
  23. Transfer learning. ELMo. BERT.
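
Illustrative code sketches

A minimal sketch of TF-IDF weighting and cosine similarity (question 2). The toy corpus, the raw-count TF, and the unsmoothed log IDF are illustrative choices, not the exact formulation used in the course.

```python
import math
from collections import Counter

def tf_idf_vectors(docs):
    """Build TF-IDF vectors (dicts term -> weight) for a list of tokenized documents."""
    n_docs = len(docs)
    # document frequency: in how many documents each term occurs
    df = Counter(term for doc in docs for term in set(doc))
    idf = {term: math.log(n_docs / df_t) for term, df_t in df.items()}
    vectors = []
    for doc in docs:
        tf = Counter(doc)
        vectors.append({term: count * idf[term] for term, count in tf.items()})
    return vectors

def cosine(u, v):
    """Cosine similarity between two sparse vectors represented as dicts."""
    dot = sum(w * v.get(term, 0.0) for term, w in u.items())
    norm_u = math.sqrt(sum(w * w for w in u.values()))
    norm_v = math.sqrt(sum(w * w for w in v.values()))
    if norm_u == 0 or norm_v == 0:
        return 0.0
    return dot / (norm_u * norm_v)

docs = [
    "the cat sat on the mat".split(),
    "the dog sat on the log".split(),
    "cats and dogs are pets".split(),
]
vecs = tf_idf_vectors(docs)
print(cosine(vecs[0], vecs[1]), cosine(vecs[0], vecs[2]))
```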
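
A textbook dynamic-programming computation of the Levenshtein distance (question 4), assuming unit costs for insertion, deletion, and substitution.

```python
def levenshtein(a: str, b: str) -> int:
    """Edit distance with unit costs: O(len(a) * len(b)) time, O(min(len)) memory."""
    if len(a) < len(b):
        a, b = b, a  # keep the shorter string in the inner dimension
    previous = list(range(len(b) + 1))
    for i, ca in enumerate(a, start=1):
        current = [i]
        for j, cb in enumerate(b, start=1):
            cost = 0 if ca == cb else 1
            current.append(min(previous[j] + 1,          # deletion
                               current[j - 1] + 1,       # insertion
                               previous[j - 1] + cost))  # substitution / match
        previous = current
    return previous[-1]

assert levenshtein("kitten", "sitting") == 3
```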
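
Power iteration for PageRank on a Markov chain with teleportation (question 5). The damping factor 0.85, the fixed number of iterations, and the tiny hand-made graph are illustrative defaults, not values prescribed by the course.

```python
def pagerank(links, damping=0.85, iterations=50):
    """Power iteration for PageRank.
    links: dict mapping node -> list of nodes it links to."""
    nodes = list(links)
    n = len(nodes)
    rank = {node: 1.0 / n for node in nodes}
    for _ in range(iterations):
        new_rank = {node: (1.0 - damping) / n for node in nodes}
        for node, outgoing in links.items():
            if not outgoing:  # dangling node: spread its mass uniformly
                for other in nodes:
                    new_rank[other] += damping * rank[node] / n
            else:
                share = damping * rank[node] / len(outgoing)
                for target in outgoing:
                    new_rank[target] += share
        rank = new_rank
    return rank

graph = {"a": ["b", "c"], "b": ["c"], "c": ["a"], "d": ["c"]}
print(pagerank(graph))
```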
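
A bigram language model with additive (Laplace) smoothing and a perplexity computation (question 7). The toy corpus, the add-one constant, and the `<s>`/`</s>` boundary markers are assumptions made only to keep the sketch self-contained; a real setup would also handle out-of-vocabulary words.

```python
import math
from collections import Counter

def train_bigram(sentences):
    """Count unigram histories and bigrams over sentences padded with <s> and </s>."""
    unigrams, bigrams, vocab = Counter(), Counter(), set()
    for sent in sentences:
        tokens = ["<s>"] + sent + ["</s>"]
        vocab.update(tokens)
        unigrams.update(tokens[:-1])            # history counts
        bigrams.update(zip(tokens, tokens[1:]))
    return unigrams, bigrams, len(vocab)

def bigram_prob(w_prev, w, unigrams, bigrams, vocab_size, k=1.0):
    """P(w | w_prev) with add-k (Laplace when k=1) smoothing."""
    return (bigrams[(w_prev, w)] + k) / (unigrams[w_prev] + k * vocab_size)

def perplexity(sentences, unigrams, bigrams, vocab_size, k=1.0):
    log_prob, n_tokens = 0.0, 0
    for sent in sentences:
        tokens = ["<s>"] + sent + ["</s>"]
        for w_prev, w in zip(tokens, tokens[1:]):
            log_prob += math.log(bigram_prob(w_prev, w, unigrams, bigrams, vocab_size, k))
            n_tokens += 1
    return math.exp(-log_prob / n_tokens)

train = [["the", "cat", "sat"], ["the", "dog", "sat"]]
test = [["the", "cat", "sat"], ["the", "sat", "cat"]]   # contains unseen bigrams
uni, bi, v = train_bigram(train)
print(perplexity(test, uni, bi, v))
```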
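
A MinHash sketch for near-duplicate search (question 12): each seeded hash function is simulated with MD5 over a seed-prefixed shingle, and the signature-agreement rate approximates the Jaccard similarity of the two shingle sets. The shingle size and the number of hash functions are arbitrary illustrative choices.

```python
import hashlib

def char_ngrams(text, n=3):
    """Set of character n-gram shingles of a string."""
    return {text[i:i + n] for i in range(len(text) - n + 1)}

def minhash_signature(shingles, num_hashes=128):
    """For each of num_hashes seeded hash functions, keep the minimum hash
    value over the shingle set."""
    signature = []
    for seed in range(num_hashes):
        min_val = min(
            int(hashlib.md5(f"{seed}:{s}".encode()).hexdigest(), 16)
            for s in shingles
        )
        signature.append(min_val)
    return signature

def estimated_jaccard(sig_a, sig_b):
    """Fraction of positions where the two signatures agree."""
    return sum(a == b for a, b in zip(sig_a, sig_b)) / len(sig_a)

a = char_ngrams("the quick brown fox jumps over the lazy dog")
b = char_ngrams("the quick brown fox jumped over a lazy dog")
true_jaccard = len(a & b) / len(a | b)
estimate = estimated_jaccard(minhash_signature(a), minhash_signature(b))
print(f"true Jaccard = {true_jaccard:.3f}, MinHash estimate = {estimate:.3f}")
```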
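
The Viterbi algorithm for decoding the most probable state sequence of an HMM (question 16). It multiplies plain probabilities, which is acceptable only for short toy sequences (log-probabilities are the safer choice); the weather/activity HMM below is the standard textbook example, not course data.

```python
def viterbi(observations, states, start_p, trans_p, emit_p):
    """Most probable state sequence for an HMM given as probability dicts."""
    # V[t][state] = (best probability of a path ending in state at time t, backpointer)
    V = [{s: (start_p[s] * emit_p[s][observations[0]], None) for s in states}]
    for t in range(1, len(observations)):
        V.append({})
        for s in states:
            best_prev, best_prob = max(
                ((prev, V[t - 1][prev][0] * trans_p[prev][s]) for prev in states),
                key=lambda pair: pair[1],
            )
            V[t][s] = (best_prob * emit_p[s][observations[t]], best_prev)
    # backtrack from the best final state
    last = max(states, key=lambda s: V[-1][s][0])
    path = [last]
    for t in range(len(observations) - 1, 0, -1):
        path.append(V[t][path[-1]][1])
    return list(reversed(path))

states = ["Rainy", "Sunny"]
start_p = {"Rainy": 0.6, "Sunny": 0.4}
trans_p = {"Rainy": {"Rainy": 0.7, "Sunny": 0.3},
           "Sunny": {"Rainy": 0.4, "Sunny": 0.6}}
emit_p = {"Rainy": {"walk": 0.1, "shop": 0.4, "clean": 0.5},
          "Sunny": {"walk": 0.6, "shop": 0.3, "clean": 0.1}}
print(viterbi(["walk", "shop", "clean"], states, start_p, trans_p, emit_p))
```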
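
A CKY recognizer for a grammar in Chomsky Normal Form (question 19). The rule encoding (sets of tuples) and the toy grammar are assumptions chosen for brevity; a full parser would additionally store backpointers to recover the tree.

```python
def cky_recognize(words, lexical_rules, binary_rules, start_symbol="S"):
    """CKY recognition for a CNF grammar, O(n^3 * |G|) time.
    lexical_rules: set of (A, word) pairs for rules A -> word.
    binary_rules: set of (A, B, C) triples for rules A -> B C."""
    n = len(words)
    # chart[i][j] = set of nonterminals deriving words[i:j]
    chart = [[set() for _ in range(n + 1)] for _ in range(n + 1)]
    for i, word in enumerate(words):
        chart[i][i + 1] = {A for (A, w) in lexical_rules if w == word}
    for span in range(2, n + 1):
        for i in range(n - span + 1):
            j = i + span
            for k in range(i + 1, j):          # split point
                for (A, B, C) in binary_rules:
                    if B in chart[i][k] and C in chart[k][j]:
                        chart[i][j].add(A)
    return start_symbol in chart[0][n]

lexical = {("Det", "the"), ("N", "dog"), ("N", "cat"), ("V", "chased")}
binary = {("S", "NP", "VP"), ("NP", "Det", "N"), ("VP", "V", "NP")}
print(cky_recognize("the dog chased the cat".split(), lexical, binary))
```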