Foundation NLP
Doc2Vec (d2v) - a tutorial with code and notebook.
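A minimal gensim Doc2Vec sketch (the toy corpus and parameter values here are illustrative, not from the tutorial):

```python
from gensim.models.doc2vec import Doc2Vec, TaggedDocument

# Toy corpus: each document gets a tag we can refer to later.
docs = [
    TaggedDocument(words=["machine", "learning", "is", "fun"], tags=["doc0"]),
    TaggedDocument(words=["deep", "learning", "for", "text"], tags=["doc1"]),
]
model = Doc2Vec(docs, vector_size=50, min_count=1, epochs=40)

# Infer a vector for an unseen document.
vec = model.infer_vector(["learning", "from", "text"])
print(vec[:5])
```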
Logistic regression with word ngrams
Logistic regression with character ngrams
Logistic regression with word and character ngrams (see the sketch after this list)
Recurrent neural network (bidirectional GRU) without pre-trained embeddings
Recurrent neural network (bidirectional GRU) with GloVe pre-trained embeddings
Multi-channel Convolutional Neural Network
RNN (Bidirectional GRU) + CNN model
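A minimal sketch of the word + character ngram logistic-regression baseline from the list above, using scikit-learn (data and parameter choices are illustrative):

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import FeatureUnion, Pipeline

# Word ngrams capture content; char ngrams are robust to typos and morphology.
model = Pipeline([
    ("features", FeatureUnion([
        ("word_ngrams", TfidfVectorizer(analyzer="word", ngram_range=(1, 2))),
        ("char_ngrams", TfidfVectorizer(analyzer="char", ngram_range=(2, 4))),
    ])),
    ("clf", LogisticRegression(max_iter=1000)),
])

texts = ["great movie", "terrible plot", "loved it", "awful acting"]
labels = [1, 0, 1, 0]
model.fit(texts, labels)
print(model.predict(["pretty great acting"]))
```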
LexNLP - information retrieval and extraction for real, unstructured legal text.
For a given word, using Vocabulary, you can get its (see the code sketch after this list):
Meaning
Synonyms
Antonyms
Part of speech: whether the word is a noun, an interjection, an adverb, etc.
Translate: translate a phrase from a source language to the desired language.
Usage example: a quick example of how to use the word in a sentence.
Pronunciation
Hyphenation: shows the particular stress points (if any).
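A quick sketch with the vocabulary package; the method names below are taken from its README and should be treated as assumptions:

```python
from vocabulary.vocabulary import Vocabulary as vb

word = "hurricane"
print(vb.meaning(word))         # dictionary meaning(s)
print(vb.synonym(word))         # synonyms
print(vb.antonym(word))         # antonyms
print(vb.part_of_speech(word))  # noun / verb / adverb ...
print(vb.usage_example(word))   # example sentence
print(vb.pronunciation(word))   # pronunciation
print(vb.hyphenation(word))     # stress points, if any
print(vb.translate("hola", "es", "en"))  # translate between languages
```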
How to measure a stemmer?
Phrase modeling is another approach to learning combinations of tokens that together represent meaningful multi-word concepts. We can develop phrase models by looping over the words in our reviews and looking for words that co-occur (i.e., appear one after another) much more frequently than we would expect by random chance. The formula our phrase models use to determine whether two tokens $A$ and $B$ constitute a phrase is:

$$\frac{\mathrm{count}(A\,B) - \mathrm{count}_{\min}}{\mathrm{count}(A) \times \mathrm{count}(B)} \times N > \mathrm{threshold}$$

where $\mathrm{count}_{\min}$ is a minimum-count cutoff and $N$ is the vocabulary size.
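This is essentially the default scorer in gensim's Phrases model; a minimal sketch (toy corpus, illustrative parameters):

```python
from gensim.models.phrases import Phrases, Phraser

sentences = [
    ["new", "york", "is", "big"],
    ["i", "love", "new", "york"],
    ["new", "york", "pizza", "is", "great"],
]
# min_count is count_min and threshold is the cutoff from the formula above.
phrases = Phrases(sentences, min_count=1, threshold=0.5)
bigram = Phraser(phrases)  # lighter, frozen version for transformation
print(bigram[["i", "love", "new", "york"]])  # -> ['i', 'love', 'new_york']
```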
A tutorial on using nltk or Stanford POS taggers: creating features from the actual words (manual stemming, etc.), using the tags as labels, and fitting a random forest, thus creating a POS classifier of our own. Not entirely sure why we need to create a classifier from a "classifier".
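A sketch of that idea: take nltk's tagged treebank sentences as labels, build word features by hand, and fit a random forest (feature set and corpus slice are illustrative):

```python
import nltk
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_extraction import DictVectorizer
from sklearn.pipeline import Pipeline

nltk.download("treebank")

def features(words, i):
    """Simple hand-made features for the word at position i."""
    w = words[i]
    return {
        "word": w.lower(),
        "suffix3": w[-3:],
        "is_capitalized": w[0].isupper(),
        "is_digit": w.isdigit(),
        "prev_word": "" if i == 0 else words[i - 1].lower(),
    }

X, y = [], []
for tagged in nltk.corpus.treebank.tagged_sents()[:500]:
    words = [w for w, _ in tagged]
    for i, (_, tag) in enumerate(tagged):
        X.append(features(words, i))
        y.append(tag)  # the existing tagger's tag is the training label

clf = Pipeline([("vec", DictVectorizer()),
                ("rf", RandomForestClassifier(n_estimators=50))])
clf.fit(X, y)
print(clf.predict([features(["NLP", "is", "fun"], 2)]))
```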
WordNet - POS, lemmatization, synonyms, antonyms, hypernyms, hyponyms.
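A short sketch of those lookups via nltk's WordNet interface:

```python
import nltk
from nltk.corpus import wordnet as wn
from nltk.stem import WordNetLemmatizer

nltk.download("wordnet")

syn = wn.synsets("good")[0]
print(syn.pos())          # part of speech
print(syn.lemma_names())  # synonyms
print([a.name() for lem in syn.lemmas() for a in lem.antonyms()])  # antonyms
print(syn.hypernyms())    # more general concepts
print(syn.hyponyms())     # more specific concepts
print(WordNetLemmatizer().lemmatize("running", pos="v"))  # -> 'run'
```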
- using a cumulative sum over synonyms for comparison; today this is replaced with w2v mean sentence similarity.
Stemmers vs. lemmatizers - stemmers are faster; lemmatizers are POS/dictionary based and slower, converting words to their base form.
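A quick contrast using nltk's Porter stemmer and WordNet lemmatizer:

```python
from nltk.stem import PorterStemmer, WordNetLemmatizer

stemmer = PorterStemmer()
lemmatizer = WordNetLemmatizer()

print(stemmer.stem("studies"))                  # 'studi' (crude rules, but fast)
print(lemmatizer.lemmatize("studies"))          # 'study' (dictionary based)
print(lemmatizer.lemmatize("better", pos="a"))  # 'good'  (needs the right POS)
```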
Chunking - shallow parsing, as compared to deep parsing; similar to NER.
Using nltk chunking as a labeller to train a classifier of our own: using IOB features as well as others to create a new NER classifier, which should be better than the original thanks to the additional features. Also uses a new English dataset, GMB (Groningen Meaning Bank).
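A sketch of pulling IOB tags out of nltk's built-in chunker, the kind of labels one would start from when training the custom NER classifier described above:

```python
import nltk
from nltk.chunk import tree2conlltags

for pkg in ["punkt", "averaged_perceptron_tagger", "maxent_ne_chunker", "words"]:
    nltk.download(pkg)

sent = "George Washington lived in Mount Vernon."
tree = nltk.ne_chunk(nltk.pos_tag(nltk.word_tokenize(sent)))
# [(word, pos, iob-tag), ...] e.g. ('George', 'NNP', 'B-PERSON')
print(tree2conlltags(tree))
```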
Corpora
A Python module to get meanings, synonyms, and more for a given word using Vocabulary (also a comparison against WordNet).
Textacy is a Python library for performing a variety of natural language processing (NLP) tasks, built on the high-performance spaCy library. With the fundamentals (tokenization, part-of-speech tagging, dependency parsing, etc.) delegated to another library, textacy focuses on the tasks that come before and follow after.
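A small sketch of that division of labor; textacy's API has shifted across versions, so make_spacy_doc and the extract helpers below are assumptions based on recent releases:

```python
import textacy
import textacy.extract

text = "Textacy builds on spaCy: spaCy parses, textacy handles what comes before and after."
doc = textacy.make_spacy_doc(text, lang="en_core_web_sm")  # spaCy does the parsing
print(list(textacy.extract.ngrams(doc, 2, min_freq=1)))    # textacy extracts on top
print(list(textacy.extract.entities(doc)))
```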
What is collocation? - "the habitual juxtaposition of a particular word with another word or words with a frequency greater than chance." A Medium post, quite good, comparing frequency / t-test / PMI / chi-square, with GitHub code (see the nltk sketch after these links).
A website dedicated to collocations: methods, references, metrics.
A tutorial with chi-square (IG?).
In R - has ideas on how to use collocations for downstream tasks (LDA, W2V, etc.); also explains PMI and other metrics. Note that the gensim metric is unsupervised and probabilistic.
NLTK on collocations.
A discussion about keeping or removing stopwords for collocations; useful, but with no firm conclusion. IMO we should remove them beforehand.
A post with code using nltk-based collocations.
A small code example for using nltk collocations.
Another code / scoring example for nltk collocations.
A Jupyter notebook on collocations - not useful.
Paper: "We introduce ngrams into four representation methods. The experimental results demonstrate ngrams' effectiveness for learning improved word representations. In addition, we find that the trained ngram embeddings are able to reflect their semantic meanings and syntactic patterns. To alleviate the costs brought by ngrams, we propose a novel way of building the co-occurrence matrix, enabling the ngram-based models to run on cheap hardware."
YouTube videos on collocations, mutual information, and related metrics.
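A minimal nltk sketch comparing several of the association measures mentioned in these links (frequency, PMI, chi-square, t-test):

```python
import nltk
from nltk.collocations import BigramAssocMeasures, BigramCollocationFinder

nltk.download("genesis")
words = nltk.corpus.genesis.words("english-web.txt")

measures = BigramAssocMeasures()
finder = BigramCollocationFinder.from_words(words)
finder.apply_freq_filter(3)  # PMI overrates rare pairs, so drop low-frequency bigrams

print(finder.nbest(measures.raw_freq, 5))   # plain frequency
print(finder.nbest(measures.pmi, 5))        # pointwise mutual information
print(finder.nbest(measures.chi_sq, 5))     # chi-square
print(finder.nbest(measures.student_t, 5))  # t-test
```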
Language detection - 55 languages: af, ar, bg, bn, ca, cs, cy, da, de, el, en, es, et, fa, fi, fr, gu, he, hi, hr, hu, id, it, ja, kn, ko, lt, lv, mk, ml, mr, ne, nl, no, pa, pl, pt, ro, ru, sk, sl, so, sq, sv, sw, ta, te, th, tl, tr, uk, ur, vi, zh-cn, zh-tw.
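This looks like the langdetect package (its README lists the same 55 languages); a minimal usage sketch under that assumption:

```python
from langdetect import DetectorFactory, detect, detect_langs

DetectorFactory.seed = 0  # the detector is non-deterministic without a fixed seed
print(detect("La vie est belle"))        # -> 'fr'
print(detect_langs("This is English."))  # -> e.g. [en:0.99...]
```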
References: (apr11), (Index Compression Factor, ICF).
- using gensim and spacy
Other tools, including one for morphological analysis, normalization, etc.