
Natural Language Processing: Text Vectorization, Classification, and n-gram Models with Python.




I. Introduction to Text Vectorization


1. Understanding Vectorization


In machine learning and data science, text vectorization is like converting written language into a mathematical representation. Imagine trying to describe a painting using only numbers; that's what vectorization does to words. It is a method of converting text data into a numerical format.


For instance, consider the word "cat." You could represent it as an array of numbers, where each number captures some quality of the word, for example how often it occurs in a piece of text. This numeric representation can then be understood by machine learning algorithms.

from sklearn.feature_extraction.text import CountVectorizer

text = ["cat"]
vectorizer = CountVectorizer()
vector = vectorizer.fit_transform(text)
print(vector.toarray())  # Output: [[1]]


2. Data Format for ML Algorithms


Most machine learning algorithms work on numerical data. Imagine trying to assemble a puzzle with pieces that don't fit together; that's what it's like feeding non-numerical text straight into an ML algorithm.


Text data is generally complex and unstructured. Converting this into a format that can be understood by algorithms is crucial. This conversion is like translating a poem into another language, preserving its essence and meaning.


Challenges with textual data:

  • Variability in length and structure

  • Presence of noise (e.g., typos, slang)

  • High dimensionality


3. Introduction to Bag of Words (BoW) Model


The Bag of Words (BoW) model is a popular way to convert text into vectors. It's like looking at a fruit salad and counting how many pieces of each fruit are in the bowl.

from sklearn.feature_extraction.text import CountVectorizer

texts = ["apple orange banana", "banana orange orange"]
vectorizer = CountVectorizer()
vector = vectorizer.fit_transform(texts)
print(vector.toarray())  # Output: [[1, 1, 1], [0, 1, 2]]

Here, the columns follow the alphabetically sorted vocabulary: "apple," "banana," "orange." The vector [1, 1, 1] represents the first text, with one occurrence of each word, while the second text is represented by [0, 1, 2]: no "apple," one "banana," and two occurrences of "orange."
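
You can confirm the column order by printing the feature names that the vectorizer learned:

print(vectorizer.get_feature_names_out())  # ['apple' 'banana' 'orange'] (use get_feature_names() on scikit-learn < 1.0)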


4. Text Preprocessing Techniques


Text preprocessing is like cleaning and chopping vegetables before cooking. It includes:

  • Dealing with different word cases and punctuation: Making all letters lowercase and removing punctuation ensures consistency.

  • Removal of stopwords: Common words like "the," "is," and "in" may not add much meaning to the text, so they can be removed.

  • Importance of smaller vocabularies: Reducing the vocabulary size helps in reducing the complexity.

Here's an example:

from sklearn.feature_extraction.text import CountVectorizer

vectorizer = CountVectorizer(lowercase=True, stop_words='english')
text = ["The sun shines, the rain falls."]
vector = vectorizer.fit_transform(text)
print(vector.toarray())  # Output: [[1, 1, 1, 1]] -- "the" was removed as a stopword


5. Building a Bag of Words Model Using Libraries


Using Python libraries like scikit-learn, you can efficiently implement BoW models. Here's an example:

from sklearn.feature_extraction.text import CountVectorizer

text = ["Data Science is fascinating."]
vectorizer = CountVectorizer()
vector = vectorizer.fit_transform(text)
print(vector.toarray())  # Output: [[1, 1, 1, 1]]

Automatic lowercasing and indexing make this process smooth and consistent.
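
To see the indexing at work, you can inspect the learned vocabulary, which maps each lowercased token to its column index:

print(vectorizer.vocabulary_)
# e.g. {'data': 0, 'fascinating': 1, 'is': 2, 'science': 3} (key order may differ when printed)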


II. Constructing a Text Classifier


1. Introduction to a Naive Bayes Classifier


A Naive Bayes Classifier is a probabilistic model that applies Bayes' theorem to predict the category of a given text. Think of it as a detective who uses evidence to determine the likelihood of different scenarios.
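
Concretely, for the spam problem below, the classifier scores each class with Bayes' theorem plus the "naive" assumption that words occur independently of one another:

P(spam | message) ∝ P(spam) × P(word_1 | spam) × P(word_2 | spam) × ... × P(word_n | spam)

The same score is computed for ham, and the class with the higher score is predicted.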


2. Spam Filtering Problem


For our classifier, we will tackle a spam filtering problem. Our task is to train a model that can differentiate between spam (unwanted) and ham (legitimate) messages.


3. Steps in Text Classification


The process of text classification can be divided into three main steps:

  • Preprocessing text: This includes cleaning, tokenizing, and vectorizing the text.

  • Building the BoW model: Creating a numerical representation of the text.

  • Predictive modeling techniques: Training a machine learning model on this numerical data.

Imagine this process like building a house, with each step laying the foundation for the next.


4. Text Preprocessing Using CountVectorizer


CountVectorizer in scikit-learn provides preprocessing options, and libraries like spaCy allow advanced tokenization and lemmatization.

from sklearn.feature_extraction.text import CountVectorizer

vectorizer = CountVectorizer(stop_words='english', lowercase=True)
text = ["Spam messages are annoying!"]
vector = vectorizer.fit_transform(text)
print(vector.toarray())  # Output: [[1, 1, 1]]
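
If you need richer preprocessing than CountVectorizer provides, spaCy can tokenize and lemmatize the text before vectorization. A minimal sketch, assuming spaCy and its small English model en_core_web_sm are installed:

import spacy

# Requires: pip install spacy && python -m spacy download en_core_web_sm
nlp = spacy.load("en_core_web_sm")
doc = nlp("Spam messages are annoying!")
lemmas = [token.lemma_ for token in doc if not token.is_stop and not token.is_punct]
print(lemmas)  # e.g. ['spam', 'message', 'annoying']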


5. Building the BoW Model for Classification


Importing necessary tools and splitting data into training and testing sets is a crucial part of the classification process.

from sklearn.model_selection import train_test_split

# X holds the vectorized messages (a BoW matrix) and y the spam/ham labels
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2)
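
For context, here is one way X and y might be prepared before the split above. The handful of messages and labels is made up purely for illustration; a real spam filter would be trained on thousands of labeled messages:

from sklearn.feature_extraction.text import CountVectorizer

messages = [
    "Win a free prize now",       # spam
    "Meeting at noon tomorrow",   # ham
    "Claim your free reward",     # spam
    "Lunch with the team today",  # ham
]
y = [1, 0, 1, 0]  # 1 = spam, 0 = ham

vectorizer = CountVectorizer(stop_words='english', lowercase=True)
X = vectorizer.fit_transform(messages)  # BoW matrix with one row per message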


6. Training the Naive Bayes Classifier


The training process can be carried out with the Multinomial Naive Bayes model.

from sklearn.naive_bayes import MultinomialNB

model = MultinomialNB()
model.fit(X_train, y_train)
accuracy = model.score(X_test, y_test)
print("Accuracy:", accuracy)  # Output e.g.: Accuracy: 0.95


This concludes the first part of our tutorial. We have covered the essentials of text vectorization and the construction of a text classifier using a Naive Bayes model. In the next part, we will dive into the exploration of n-gram models, their implementation, and their limitations.


III. Exploring n-gram Models


1. Understanding BoW Shortcomings


While the Bag of Words (BoW) model is a robust method for text vectorization, it has some limitations, notably in capturing the context and the positioning of words. To illustrate, consider the sentences "The cat chased the dog" and "The dog chased the cat." BoW treats these sentences identically, disregarding the order of the words.


The shortcoming can be likened to a jigsaw puzzle where the individual pieces (words) are there, but their connecting edges (order and relationship) are ignored.
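
You can see this loss of word order directly: both sentences produce exactly the same count vector.

from sklearn.feature_extraction.text import CountVectorizer

sentences = ["The cat chased the dog", "The dog chased the cat"]
vectorizer = CountVectorizer()
vectors = vectorizer.fit_transform(sentences)
print(vectorizer.get_feature_names_out())  # ['cat' 'chased' 'dog' 'the']
print(vectors.toarray())  # Output: [[1, 1, 1, 2], [1, 1, 1, 2]] -- identical rows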


2. Introduction to n-grams


To overcome the limitations of the BoW model, we can employ n-grams. An n-gram is a contiguous sequence of n items from a given text. In the context of text analysis, this means considering sequences of words rather than individual words. It's like observing the pattern of dancers in a line dance rather than looking at each dancer independently.

  • Bigrams (n=2): Pairs of consecutive words

  • Trigrams (n=3): Triples of consecutive words

  • ...and so on

Here's an example of how to create bigrams:

from sklearn.feature_extraction.text import CountVectorizer

vectorizer = CountVectorizer(ngram_range=(2, 2))
text = ["The cat chased the dog"]
bigrams = vectorizer.fit_transform(text)
print(bigrams.toarray())  # Output: [[1, 1, 1, 1]] -- one count per bigram
print(vectorizer.get_feature_names_out())  # ['cat chased' 'chased the' 'the cat' 'the dog']


3. Applications of n-grams


The utilization of n-grams is widespread and includes applications like:

  • Sentence completion: Predicting the next word in a sentence (a toy sketch follows this list)

  • Spelling correction: Recognizing patterns to correct misspelled words

  • Translation: Enhancing machine translation by considering word pairs or triples
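
To make the sentence-completion idea concrete, here is a toy next-word predictor built from raw bigram counts. The three-sentence corpus is invented for demonstration; real systems use far larger corpora and smoothed probabilities:

from collections import Counter, defaultdict

corpus = ["the cat sleeps", "the cat eats", "the dog barks"]

# Count how often each word follows another word (i.e., tally the bigrams)
following = defaultdict(Counter)
for sentence in corpus:
    words = sentence.split()
    for current_word, next_word in zip(words, words[1:]):
        following[current_word][next_word] += 1

# The most frequent word after "the" is the prediction
print(following["the"].most_common(1))  # [('cat', 2)]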


4. Building n-gram Models Using scikit-learn


Creating n-gram models is straightforward with scikit-learn's CountVectorizer. By setting the ngram_range argument, we can define the order of n-grams to consider.

from sklearn.feature_extraction.text import CountVectorizer

texts = ["Data Science is fascinating", "I love learning about Data Science"]
vectorizer = CountVectorizer(ngram_range=(2, 2))
bigrams = vectorizer.fit_transform(texts)
print(bigrams.toarray())  # Each column counts one bigram from the learned vocabulary
print(vectorizer.get_feature_names_out())  # e.g. ['about data' 'data science' 'is fascinating' ...]


5. Shortcomings of Using High-Order n-grams


While n-grams capture more context, using high-order n-grams (e.g., 4-grams, 5-grams) presents challenges:

  • Problems of dimensionality: The higher the n, the more complex and vast the feature space becomes.

  • Rarity of high-order n-grams: Longer sequences may not be commonly found, reducing their effectiveness.

  • Recommendations to restrict to small n values: To avoid the "curse of dimensionality," it's often best to stick to bigrams and trigrams.


The drawbacks of high-order n-grams can be likened to grating a cheese ever more finely: at some point the pieces become so numerous and so hard to tell apart that working with them is no longer practical.
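
The growth in dimensionality is easy to demonstrate: even on two short sentences, raising the n-gram order keeps adding new columns, and on a real corpus the feature space quickly becomes enormous. A small sketch:

from sklearn.feature_extraction.text import CountVectorizer

texts = ["Data Science is fascinating", "I love learning about Data Science"]
for n in range(1, 5):
    vectorizer = CountVectorizer(ngram_range=(1, n))
    matrix = vectorizer.fit_transform(texts)
    print(f"up to {n}-grams: {matrix.shape[1]} features")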


Conclusion


Natural Language Processing is a vibrant field, encompassing various techniques to handle and analyze text. In this comprehensive tutorial, we navigated through text vectorization, building classifiers, and exploring n-gram models. The journey was like sculpting a statue, starting with a rough stone and chiseling it into shape, considering every detail.


The concepts covered, from the basic Bag of Words model to the more nuanced n-gram models, provide essential tools for data scientists working with textual data. Python, with its rich libraries like scikit-learn, offers an accessible path to implement these methods.


Through hands-on code snippets and engaging analogies, we've explored the art and science of transforming text into numerical data, understanding its structure, and making predictions. The door to more advanced topics and applications now stands open, ready to be explored by curious minds.
