
___

Copyright by Pierian Data Inc. For more information, visit us at www.pieriandata.com

Feature Extraction from Text

This notebook is divided into two sections:

  • First, we'll find out what is necessary to build an NLP system that can turn a body of text into a numerical array of features, by manually calculating word frequencies and building out TF-IDF.
  • Next we'll show how to perform these steps using scikit-learn tools.

Part One: Core Concepts on Feature Extraction

In this section we'll use basic Python to build a rudimentary NLP system. We'll build a corpus of documents (two small text files), create a vocabulary from all the words in both documents, and then demonstrate a Bag of Words technique to extract features from each document.

This first section is for illustration only!
Don't worry about memorizing this code - later on we will let Scikit-Learn Preprocessing tools do this for us.

Start with some documents:

For simplicity we won't use any punctuation in the text files One.txt and Two.txt. Let's quickly open and read them. Keep in mind that you should avoid reading an entire file into memory at once if it is very large; it's usually better to process it lazily, as sketched below.
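If you ever do need to work with a very large file, a common pattern is to iterate over the file object itself so Python reads one line at a time instead of loading everything into memory. A minimal sketch (assuming a hypothetical large file called Large.txt exists):

# Hypothetical example: 'Large.txt' stands in for a file too big to read at once.
# Iterating over the file object yields one line at a time, keeping memory use low.
line_count = 0
with open('Large.txt') as bigfile:
    for line in bigfile:
        line_count += 1  # process each line here instead of storing them all
print(line_count)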

In [57]:
with open('One.txt') as mytext:
    print(mytext.read())
This is a story about dogs
our canine pets
Dogs are furry animals

In [58]:
with open('Two.txt') as mytext:
    print(mytext.read())
This story is about surfing
Catching waves is fun
Surfing is a popular water sport

Reading entire text as a string

In [59]:
with open('One.txt') as mytext:
    entire_text = mytext.read()
In [60]:
entire_text
Out[60]:
'This is a story about dogs\nour canine pets\nDogs are furry animals\n'
In [61]:
print(entire_text)
This is a story about dogs
our canine pets
Dogs are furry animals

Reading Each Line as a List

In [62]:
with open('One.txt') as mytext:
    lines = mytext.readlines()
In [63]:
lines
Out[63]:
['This is a story about dogs\n',
 'our canine pets\n',
 'Dogs are furry animals\n']

Reading in Words Separately

In [64]:
with open('One.txt') as f:
    words = f.read().lower().split()
In [65]:
words
Out[65]:
['this',
 'is',
 'a',
 'story',
 'about',
 'dogs',
 'our',
 'canine',
 'pets',
 'dogs',
 'are',
 'furry',
 'animals']

Building a vocabulary (Creating a "Bag of Words")

Let's find the unique words in each document, then map every word to an index in a vocabulary dictionary. We can begin to think of this as cataloging all the possible words available across both documents.

In [83]:
with open('One.txt') as f:
    words_one = f.read().lower().split()
In [84]:
words_one
Out[84]:
['this',
 'is',
 'a',
 'story',
 'about',
 'dogs',
 'our',
 'canine',
 'pets',
 'dogs',
 'are',
 'furry',
 'animals']
In [85]:
len(words_one)
Out[85]:
13
In [86]:
uni_words_one = set(words_one)
In [87]:
uni_words_one
Out[87]:
{'a',
 'about',
 'animals',
 'are',
 'canine',
 'dogs',
 'furry',
 'is',
 'our',
 'pets',
 'story',
 'this'}

Repeat for Two.txt

In [88]:
with open('Two.txt') as f:
    words_two = f.read().lower().split()
    uni_words_two = set(words_two)
In [89]:
uni_words_two
Out[89]:
{'a',
 'about',
 'catching',
 'fun',
 'is',
 'popular',
 'sport',
 'story',
 'surfing',
 'this',
 'water',
 'waves'}

Get all unique words across all documents

In [91]:
all_uni_words = set()
all_uni_words.update(uni_words_one)
all_uni_words.update(uni_words_two)
In [93]:
all_uni_words
Out[93]:
{'a',
 'about',
 'animals',
 'are',
 'canine',
 'catching',
 'dogs',
 'fun',
 'furry',
 'is',
 'our',
 'pets',
 'popular',
 'sport',
 'story',
 'surfing',
 'this',
 'water',
 'waves'}
In [94]:
full_vocab = dict()
i = 0

for word in all_uni_words:
    full_vocab[word] = i
    i = i+1
In [96]:
# Do not expect this to be in alphabetical order!
# A set is unordered, so the loop visits the words in an arbitrary order, not alphabetically!
full_vocab
Out[96]:
{'water': 0,
 'sport': 1,
 'canine': 2,
 'pets': 3,
 'about': 4,
 'catching': 5,
 'dogs': 6,
 'furry': 7,
 'fun': 8,
 'story': 9,
 'is': 10,
 'our': 11,
 'surfing': 12,
 'animals': 13,
 'are': 14,
 'this': 15,
 'popular': 16,
 'a': 17,
 'waves': 18}
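The same mapping can be built more compactly with enumerate, which hands back an increasing index alongside each word. This is just an alternative sketch; the index assignments may differ from the loop above because set iteration order is arbitrary.

# Equivalent vocabulary construction using a dict comprehension and enumerate.
full_vocab_alt = {word: i for i, word in enumerate(all_uni_words)}
len(full_vocab_alt)  # 19, the same size as full_vocab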

Bag of Words to Frequency Counts

Now that we've encapsulated our "entire language" in a dictionary, let's perform feature extraction on each of our original documents:

Empty counts per doc

In [126]:
# Create an empty vector with space for each word in the vocabulary:
one_freq = [0]*len(full_vocab)
two_freq = [0]*len(full_vocab)
all_words = ['']*len(full_vocab)
In [127]:
one_freq
Out[127]:
[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]
In [128]:
two_freq
Out[128]:
[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]
In [129]:
all_words
Out[129]:
['', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '']
In [130]:
for word in full_vocab:
    word_ind = full_vocab[word]
    all_words[word_ind] = word
In [131]:
all_words
Out[131]:
['water',
 'sport',
 'canine',
 'pets',
 'about',
 'catching',
 'dogs',
 'furry',
 'fun',
 'story',
 'is',
 'our',
 'surfing',
 'animals',
 'are',
 'this',
 'popular',
 'a',
 'waves']

Add in counts per word per doc:

In [132]:
# map the frequencies of each word in 1.txt to our vector:
with open('One.txt') as f:
    one_text = f.read().lower().split()
    
for word in one_text:
    word_ind = full_vocab[word]
    one_freq[word_ind]+=1
In [133]:
one_freq
Out[133]:
[0, 0, 1, 1, 1, 0, 2, 1, 0, 1, 1, 1, 0, 1, 1, 1, 0, 1, 0]
In [134]:
# Do the same for the second document:
with open('Two.txt') as f:
    two_text = f.read().lower().split()
    
for word in two_text:
    word_ind = full_vocab[word]
    two_freq[word_ind]+=1
In [135]:
two_freq
Out[135]:
[1, 1, 0, 0, 1, 1, 0, 0, 1, 1, 3, 0, 2, 0, 0, 1, 1, 1, 1]
In [141]:
import pandas as pd

pd.DataFrame(data=[one_freq,two_freq],columns=all_words)
Out[141]:
water sport canine pets about catching dogs furry fun story is our surfing animals are this popular a waves
0 0 0 1 1 1 0 2 1 0 1 1 1 0 1 1 1 0 1 0
1 1 1 0 0 1 1 0 0 1 1 3 0 2 0 0 1 1 1 1

By comparing the vectors we see that some words are common to both, some appear only in One.txt, and others only in Two.txt. Extending this logic to tens of thousands of documents, the vocabulary dictionary could grow to hundreds of thousands of words. The vectors would then contain mostly zero values, which is why they are typically stored as sparse matrices.
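To get a feel for how such vectors are usually stored, here is a minimal sketch (assuming SciPy is installed) that packs our two frequency vectors into a compressed sparse row matrix, where only the non-zero counts and their positions are kept:

from scipy.sparse import csr_matrix

# Stack the two dense frequency vectors into a 2 x 19 sparse matrix.
sparse_counts = csr_matrix([one_freq, two_freq])
print(sparse_counts.shape)  # (2, 19)
print(sparse_counts.nnz)    # number of stored (non-zero) values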

Concepts to Consider:

Bag of Words and Tf-idf

In the above examples, each vector can be considered a bag of words. By themselves these counts may not be helpful until we consider term frequencies, or how often individual words appear in documents. A simple way to calculate term frequencies is to divide the number of occurrences of a word by the total number of words in the document. In this way, the number of times a word appears in a large document can be fairly compared to that in a smaller document.

However, it may be hard to differentiate documents based on term frequency alone if a word shows up in a majority of documents. To handle this we also consider inverse document frequency, which is the total number of documents divided by the number of documents that contain the word. In practice we convert this value to a logarithmic scale.

Together these terms become tf-idf.
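As a rough illustration of these definitions (not the exact formula scikit-learn uses, which adds smoothing and normalization), here is a sketch that computes tf-idf for the word 'dogs' across our two documents, reusing the one_text and two_text word lists from above:

import math

word = 'dogs'

# Term frequency: occurrences of the word divided by total words in each document.
tf_one = one_text.count(word) / len(one_text)
tf_two = two_text.count(word) / len(two_text)

# Inverse document frequency: log of (total documents / documents containing the word).
docs = [one_text, two_text]
docs_with_word = sum(1 for doc in docs if word in doc)
idf = math.log(len(docs) / docs_with_word)

print(tf_one * idf, tf_two * idf)  # tf-idf of 'dogs' in each document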

Stop Words and Word Stems

Some words like "the" and "and" appear so frequently, and in so many documents, that we needn't bother counting them. Also, it may make sense to only record the root of a word, say cat in place of both cat and cats. This will shrink our vocab array and improve performance.
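A minimal sketch of both ideas, using a tiny hand-picked stop-word set and a naive plural-stripping rule (real systems use curated stop-word lists and proper stemmers, such as NLTK's PorterStemmer):

# Hypothetical, hand-picked stop words; real lists are much longer.
stop_words = {'the', 'and', 'a', 'is', 'this', 'about', 'are', 'our'}

def crude_stem(word):
    # Naive rule: strip a trailing 's' so 'dogs' and 'dog' share one vocabulary entry.
    return word[:-1] if word.endswith('s') and len(word) > 3 else word

[crude_stem(w) for w in words_one if w not in stop_words]
# ['story', 'dog', 'canine', 'pet', 'dog', 'furry', 'animal']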

Tokenization and Tagging

When we created our vectors the first thing we did was split the incoming text on whitespace with .split(). This was a crude form of tokenization - that is, dividing a document into individual words. In this simple example we didn't worry about punctuation or different parts of speech. In the real world we rely on some fairly sophisticated morphology to parse text appropriately.

Once the text is divided, we can go back and tag our tokens with information about parts of speech, grammatical dependencies, etc. This adds more dimensions to our data and enables a deeper understanding of the context of specific documents. For this reason, vectors become high-dimensional sparse matrices.
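For example, a sketch of tokenization and tagging with spaCy (assuming spaCy and its small English model en_core_web_sm are installed) might look like this:

import spacy

nlp = spacy.load('en_core_web_sm')
doc = nlp('Dogs are furry animals')

# Each token carries its text, a part-of-speech tag, and a dependency label.
for token in doc:
    print(token.text, token.pos_, token.dep_)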

Part Two: Feature Extraction with Scikit-Learn

Let's explore the more realistic process of using sklearn to complete the tasks mentioned above!

Scikit-Learn's Text Feature Extraction Options

In [185]:
text = ['This is a line',
           "This is another line",
       "Completely different line"]

CountVectorizer

In [186]:
from sklearn.feature_extraction.text import TfidfTransformer,TfidfVectorizer,CountVectorizer
In [187]:
cv = CountVectorizer()
In [188]:
cv.fit_transform(text)
Out[188]:
<3x6 sparse matrix of type '<class 'numpy.int64'>'
	with 10 stored elements in Compressed Sparse Row format>
In [189]:
sparse_mat = cv.fit_transform(text)
In [190]:
sparse_mat.todense()
Out[190]:
matrix([[0, 0, 0, 1, 1, 1],
        [1, 0, 0, 1, 1, 1],
        [0, 1, 1, 0, 1, 0]], dtype=int64)
In [191]:
cv.vocabulary_
Out[191]:
{'this': 5, 'is': 3, 'line': 4, 'another': 0, 'completely': 1, 'different': 2}
In [192]:
cv = CountVectorizer(stop_words='english')
In [193]:
cv.fit_transform(text).todense()
Out[193]:
matrix([[0, 0, 1],
        [0, 0, 1],
        [1, 1, 1]], dtype=int64)
In [194]:
cv.vocabulary_
Out[194]:
{'line': 2, 'completely': 0, 'different': 1}

TfidfTransformer

TfidfVectorizer is applied directly to raw text documents, while TfidfTransformer is applied to an existing count matrix, such as the one returned by CountVectorizer.

In [206]:
tfidf_transformer = TfidfTransformer()
In [207]:
cv = CountVectorizer()
In [208]:
counts = cv.fit_transform(text)
In [209]:
counts
Out[209]:
<3x6 sparse matrix of type '<class 'numpy.int64'>'
	with 10 stored elements in Compressed Sparse Row format>
In [210]:
tfidf = tfidf_transformer.fit_transform(counts)
In [211]:
tfidf.todense()
Out[211]:
matrix([[0.        , 0.        , 0.        , 0.61980538, 0.48133417,
         0.61980538],
        [0.63174505, 0.        , 0.        , 0.4804584 , 0.37311881,
         0.4804584 ],
        [0.        , 0.65249088, 0.65249088, 0.        , 0.38537163,
         0.        ]])
In [212]:
from sklearn.pipeline import Pipeline
In [215]:
pipe = Pipeline([('cv',CountVectorizer()),('tfidf',TfidfTransformer())])
In [219]:
results = pipe.fit_transform(text)
In [220]:
results
Out[220]:
<3x6 sparse matrix of type '<class 'numpy.float64'>'
	with 10 stored elements in Compressed Sparse Row format>
In [218]:
results.todense()
Out[218]:
matrix([[0.        , 0.        , 0.        , 0.61980538, 0.48133417,
         0.61980538],
        [0.63174505, 0.        , 0.        , 0.4804584 , 0.37311881,
         0.4804584 ],
        [0.        , 0.65249088, 0.65249088, 0.        , 0.38537163,
         0.        ]])

TfidfVectorizer

Does both of the above steps in one!

In [202]:
tfidf = TfidfVectorizer()
In [203]:
new = tfidf.fit_transform(text)
In [204]:
new.todense()
Out[204]:
matrix([[0.        , 0.        , 0.        , 0.61980538, 0.48133417,
         0.61980538],
        [0.63174505, 0.        , 0.        , 0.4804584 , 0.37311881,
         0.4804584 ],
        [0.        , 0.65249088, 0.65249088, 0.        , 0.38537163,
         0.        ]])
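To see which column of the tf-idf matrix corresponds to which word, recent versions of scikit-learn expose get_feature_names_out on the fitted vectorizer (older versions use get_feature_names). A short usage sketch:

import pandas as pd

# Label the tf-idf columns with the vocabulary learned by the vectorizer.
pd.DataFrame(new.todense(), columns=tfidf.get_feature_names_out())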