Feature Extraction from Text¶
This notebook is divided into two sections:
- First, we'll find out what is necessary to build an NLP system that can turn a body of text into a numerical array of features by manually calculating frequencies and building out TF-IDF.
- Next we'll show how to perform these steps using scikit-learn tools.
Part One: Core Concepts on Feature Extraction¶
In this section we'll use basic Python to build a rudimentary NLP system. We'll build a corpus of documents (two small text files), create a vocabulary from all the words in both documents, and then demonstrate a Bag of Words technique to extract features from each document.
Don't worry about memorizing this code - later on we will let Scikit-Learn Preprocessing tools do this for us.
Start with some documents:¶
For simplicity we won't use any punctuation in the text files One.txt and Two.txt. Let's quickly open them and read them. Keep in mind that you should avoid reading an entire file into memory at once if it is very large; our files here are small, so reading them in full is fine.
with open('One.txt') as mytext:
print(mytext.read())
with open('Two.txt') as mytext:
print(mytext.read())
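For larger files, here is a minimal sketch (assuming the same One.txt) of iterating over the file line by line, so only one line is held in memory at a time:
# Iterate lazily - the file object yields one line at a time
with open('One.txt') as mytext:
    for line in mytext:
        print(line.strip())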
Reading entire text as a string¶
with open('One.txt') as mytext:
entire_text = mytext.read()
entire_text
print(entire_text)
Reading Each Line as a List¶
with open('One.txt') as mytext:
lines = mytext.readlines()
lines
Reading in Words Separately¶
with open('One.txt') as f:
words = f.read().lower().split()
words
Building a vocabulary (Creating a "Bag of Words")¶
Let's create dictionaries that correspond to unique mappings of the words in the documents. We can begin to think of this as mapping out all the possible words available for all (both) documents.
with open('One.txt') as f:
words_one = f.read().lower().split()
words_one
len(words_one)
uni_words_one = set(words_one)
uni_words_one
Repeat for Two.txt
with open('Two.txt') as f:
words_two = f.read().lower().split()
uni_words_two = set(words_two)
uni_words_two
Get all unique words across all documents
all_uni_words = set()
all_uni_words.update(uni_words_one)
all_uni_words.update(uni_words_two)
all_uni_words
full_vocab = dict()
i = 0
for word in all_uni_words:
full_vocab[word] = i
i = i+1
# Do not expect this to be in alphabetical order!
# A set is unordered - iteration follows its internal hash order, not alphabetical order!
full_vocab
Bag of Words to Frequency Counts¶
Now that we've encapsulated our "entire language" in a dictionary, let's perform feature extraction on each of our original documents:
Empty counts per doc
# Create an empty vector with space for each word in the vocabulary:
one_freq = [0]*len(full_vocab)
two_freq = [0]*len(full_vocab)
all_words = ['']*len(full_vocab)
one_freq
two_freq
all_words
for word in full_vocab:
word_ind = full_vocab[word]
all_words[word_ind] = word
all_words
Add in counts per word per doc:
# map the frequencies of each word in One.txt to our vector:
with open('One.txt') as f:
one_text = f.read().lower().split()
for word in one_text:
word_ind = full_vocab[word]
one_freq[word_ind]+=1
one_freq
# Do the same for the second document:
with open('Two.txt') as f:
two_text = f.read().lower().split()
for word in two_text:
word_ind = full_vocab[word]
two_freq[word_ind]+=1
two_freq
import pandas as pd

pd.DataFrame(data=[one_freq,two_freq],columns=all_words)
By comparing the vectors we see that some words are common to both documents, some appear only in One.txt, and others only in Two.txt. Extending this logic to tens of thousands of documents, we would see the vocabulary dictionary grow to hundreds of thousands of words. The vectors would contain mostly zero values, making them sparse matrices.
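To put a rough number on that idea, here is a minimal sketch (reusing the one_freq and two_freq vectors built above) that measures the fraction of zero entries in each document vector:
# Fraction of zero entries in each frequency vector
one_sparsity = one_freq.count(0) / len(one_freq)
two_sparsity = two_freq.count(0) / len(two_freq)
one_sparsity, two_sparsity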
Concepts to Consider:¶
Bag of Words and Tf-idf¶
In the above examples, each vector can be considered a bag of words. By themselves these may not be very helpful until we consider term frequencies, or how often individual words appear in documents. A simple way to calculate term frequency is to divide the number of occurrences of a word by the total number of words in the document. In this way, word counts from large documents can be compared fairly with those from smaller documents.
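A minimal sketch of that calculation, reusing the one_freq counts from the cells above:
# Term frequency: word count divided by the total number of words in the document
total_words_one = sum(one_freq)
one_tf = [count / total_words_one for count in one_freq]
one_tf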
However, it may be hard to differentiate documents based on term frequency if a word shows up in a majority of documents. To handle this we also consider the inverse document frequency, which is the total number of documents divided by the number of documents that contain the word. In practice this value is converted to a logarithmic scale.
Together these terms become tf-idf.
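Here is a minimal sketch of that combined calculation for our two documents, reusing one_freq, two_freq and all_words from above. Note this uses the plain textbook formula; scikit-learn applies additional smoothing and normalization, so its numbers will differ.
import math

docs = [one_freq, two_freq]
n_docs = len(docs)

tfidf_vectors = []
for freq in docs:
    total_words = sum(freq)
    tfidf = []
    for i in range(len(all_words)):
        tf = freq[i] / total_words
        # number of documents that contain this word
        df = sum(1 for d in docs if d[i] > 0)
        idf = math.log(n_docs / df)
        tfidf.append(tf * idf)
    tfidf_vectors.append(tfidf)

pd.DataFrame(tfidf_vectors, columns=all_words)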
Stop Words and Word Stems¶
Some words like "the" and "and" appear so frequently, and in so many documents, that we needn't bother counting them. Also, it may make sense to only record the root of a word, say cat in place of both cat and cats. This will shrink our vocab array and improve performance.
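A minimal sketch of both ideas, using a tiny hand-picked stop word list and a deliberately crude suffix-stripping "stem" (a real pipeline would use a proper stop word list and stemmer):
stop_words = {'the', 'and', 'a', 'is', 'this'}   # tiny illustrative list

def crude_stem(word):
    # naive suffix stripping - a real stemmer is far more careful
    return word[:-1] if word.endswith('s') else word

filtered = [crude_stem(w) for w in words_one if w not in stop_words]
filtered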
Tokenization and Tagging¶
When we created our vectors the first thing we did was split the incoming text on whitespace with .split(). This was a crude form of tokenization - that is, dividing a document into individual words. In this simple example we didn't worry about punctuation or different parts of speech. In the real world we rely on some fairly sophisticated morphology to parse text appropriately.
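As a small illustration, here is a minimal sketch of slightly less crude tokenization that uses a regular expression to strip punctuation (the sample sentence is made up for this example):
import re

sample = "A token-izer should handle punctuation, shouldn't it?"
tokens = re.findall(r"[a-z']+", sample.lower())
tokens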
Once the text is divided, we can go back and tag our tokens with information about parts of speech, grammatical dependencies, etc. This adds more dimensions to our data and enables a deeper understanding of the context of specific documents. For this reason, vectors become *high dimensional sparse matrices*.
Part Two: Feature Extraction with Scikit-Learn¶
Let's explore the more realistic process of using sklearn to complete the tasks mentioned above!
Scikit-Learn's Text Feature Extraction Options¶
text = ['This is a line',
"This is another line",
"Completely different line"]
CountVectorizer¶
from sklearn.feature_extraction.text import TfidfTransformer,TfidfVectorizer,CountVectorizer
cv = CountVectorizer()
cv.fit_transform(text)
sparse_mat = cv.fit_transform(text)
sparse_mat.todense()
cv.vocabulary_
cv = CountVectorizer(stop_words='english')
cv.fit_transform(text).todense()
cv.vocabulary_
TfidfTransformer¶
TfidfVectorizer is used directly on raw text documents, while TfidfTransformer is used on an existing count matrix, such as the one returned by CountVectorizer.
tfidf_transformer = TfidfTransformer()
cv = CountVectorizer()
counts = cv.fit_transform(text)
counts
tfidf = tfidf_transformer.fit_transform(counts)
tfidf.todense()
from sklearn.pipeline import Pipeline
pipe = Pipeline([('cv',CountVectorizer()),('tfidf',TfidfTransformer())])
results = pipe.fit_transform(text)
results
results.todense()
TfIdfVectorizer¶
Does both of the above steps in one!
tfidf = TfidfVectorizer()
new = tfidf.fit_transform(text)
new.todense()