

Understanding Dimensionality in Data Science



I. Understanding Dimensionality in Data


A. The Curse of Dimensionality: Introduction


Dimensionality in data science refers to the number of attributes or variables used to describe an observation or instance in a dataset. In simple terms, imagine a point: with one attribute you can describe its position along a line (1D), with two attributes you can locate it on a surface (2D), and with three you can place it in space (3D). Now imagine you have hundreds or even thousands of attributes. You are dealing with a high-dimensional space that is far harder to visualize and process.


This leads us to the "Curse of Dimensionality," a term that refers to various phenomena that arise when dealing with high-dimensional data. As the dimensionality increases, the volume of the space increases exponentially, and data becomes sparse. This can lead to a loss of information, difficulty in visualization, and increased computational complexity.


Creating a Random High-Dimensional Dataset

import numpy as np

# Create a random dataset with 100 observations and 1000 dimensions
data = np.random.rand(100, 1000)

# Shape of the dataset
print("Shape of the dataset:", data.shape)

Output

Shape of the dataset: (100, 1000)
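
This random dataset already hints at the sparsity problem. The following sketch (an illustration, independent of the dataset above) compares the closest and farthest pairwise distances among 100 random points as the number of dimensions grows; the min/max ratio creeps toward 1, meaning every point ends up roughly as far from its nearest neighbor as from its farthest one.


Illustrating Sparsity: Distance Concentration

import numpy as np

rng = np.random.default_rng(42)

# 100 random points in the unit hypercube, for increasing dimensionality
for d in [2, 10, 100, 1000]:
    points = rng.random((100, d))

    # All pairwise Euclidean distances (unique pairs only, no self-distances)
    diffs = points[:, None, :] - points[None, :, :]
    dists = np.sqrt((diffs ** 2).sum(axis=-1))
    dists = dists[np.triu_indices(100, k=1)]

    # As d grows, the nearest and farthest neighbors become almost equally far away
    print(f"d={d}: min/max distance ratio = {dists.min() / dists.max():.2f}")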


B. Problems with High-Dimensional Datasets: Overfitting


As the number of dimensions increases, the risk of overfitting becomes prominent. Overfitting occurs when a model fits the training data too closely, capturing its noise along with the signal and losing the ability to generalize to unseen data.


Example Analogy


Imagine fitting a line through two points: there is exactly one line that passes through both. With three points, a straight line generally cannot pass through all of them exactly, but a slightly more flexible curve (a parabola) can. With hundreds of points, you can always find a curve complex enough to pass through every single one. While this might look like a perfect fit, it is tailored to those exact points, noise included, and won't work well on new data.
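
To make the analogy concrete, here is a minimal sketch with made-up points: a straight-line fit leaves small residuals that reflect the noise, while a high-degree polynomial drives its residuals to nearly zero by memorizing that noise.


Fitting a Line vs. a Complex Curve

import numpy as np

# Eight noisy points generated from a simple linear trend (made-up data)
rng = np.random.default_rng(0)
x = np.linspace(0, 1, 8)
y = 2 * x + rng.normal(scale=0.1, size=x.size)

# A straight line captures the underlying trend; a degree-7 polynomial
# passes (almost) exactly through all eight points, noise included
line = np.poly1d(np.polyfit(x, y, deg=1))
wiggly = np.poly1d(np.polyfit(x, y, deg=7))

# Near-zero residuals for the degree-7 fit mean it has memorized the noise,
# not that it has learned the trend
print("Max residual, straight line:", np.abs(line(x) - y).max())
print("Max residual, degree-7 poly:", np.abs(wiggly(x) - y).max())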


Overfitting in High Dimensions

from sklearn.model_selection import train_test_split
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_squared_error

# A random target for the random 100 x 1000 dataset created above;
# since the features are pure noise, nothing learned from them can generalize
target = np.random.rand(100)

# Split data into training and testing sets
X_train, X_test, y_train, y_test = train_test_split(data, target, test_size=0.2, random_state=42)

# Train a linear regression model
model = LinearRegression()
model.fit(X_train, y_train)

# Evaluate the model: with 1000 features and only 80 training rows,
# the model can fit the training set almost perfectly (training error near zero)
# while the test error stays large -- a textbook case of overfitting
train_error = mean_squared_error(y_train, model.predict(X_train))
test_error = mean_squared_error(y_test, model.predict(X_test))

print("Training error:", train_error)
print("Test error:", test_error)


C. Solutions to Reduce Dimensionality


Reducing dimensionality can alleviate some problems associated with high-dimensional data, such as overfitting and computational burden. There are several methods to achieve this:

  1. Feature Selection: Selecting a subset of the most important features (see the sketch after this list).

  2. Feature Extraction: Combining features to create new ones that still capture the essential information.

  3. Dimensionality Reduction Algorithms: Applying techniques like Principal Component Analysis (PCA) to reduce dimensions.
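
As a quick illustration of the first approach, the sketch below applies scikit-learn's SelectKBest to a synthetic regression problem; the generated dataset and the choice of k = 10 are purely illustrative.


Selecting the Most Informative Features

from sklearn.datasets import make_regression
from sklearn.feature_selection import SelectKBest, f_regression

# A synthetic problem with 1000 features, only 10 of which carry signal
X_syn, y_syn = make_regression(n_samples=100, n_features=1000,
                               n_informative=10, random_state=42)

# Keep the 10 features with the strongest univariate relationship to the target
selector = SelectKBest(score_func=f_regression, k=10)
X_selected = selector.fit_transform(X_syn, y_syn)

print("Shape after feature selection:", X_selected.shape)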


Using PCA

from sklearn.decomposition import PCA

# Apply PCA to reduce the dimensionality to 10
pca = PCA(n_components=10)
reduced_data = pca.fit_transform(data)

# Shape of the reduced dataset
print("Shape of the reduced dataset:", reduced_data.shape)

Output

Shape of the reduced dataset: (100, 10)


D. Building Intuition on Model Overfitting


Overfitting is like preparing for an exam by memorizing every question and answer from past papers. That strategy works if the exact same questions come up again, but it fails the moment the exam asks something new. In the same way, a model that overfits is so focused on the training data that it loses its ability to generalize.


II. Modeling House Prices: An Example


A. Predicting a City Based on House Features


In this section, we'll explore how to model house prices using various features. Let's assume we have a dataset containing details about houses in different cities. Features could include the number of bedrooms, bathrooms, surface area, floors, and more. Our task is to predict the city a house is located in based on these features.


Example Analogy


Imagine different cities as different types of cuisines. Each dish (house) can be characterized by various ingredients (features like bedrooms, bathrooms, etc.). By tasting a dish (analyzing house features), a food expert (model) should be able to tell the type of cuisine (city) it belongs to.


Loading House Data

import pandas as pd

# Load the dataset containing house features
data = pd.read_csv('house_data.csv')

# Display the first few rows
print(data.head())


B. Using Different Number of Observations to Avoid Overfitting


By using different numbers of observations (i.e., different subsets of our data), we can control the risk of overfitting. Too many features with too few observations can lead to overfitting. Conversely, increasing the observations can mitigate this issue.


Splitting Data by Observations

# Using 80% of data for training and 20% for testing
train_data = data.sample(frac=0.8, random_state=42)
test_data = data.drop(train_data.index)

print("Train shape:", train_data.shape)
print("Test shape:", test_data.shape)


C. Building a City Classifier


To predict the city based on house features, we'll build a city classifier. Here's how we'll proceed:

  1. Data Splitting (Train-Test Split): We'll divide our data into a training set and a test set.

  2. Model Fitting (Support Vector Machine Classifier): We'll use the Support Vector Machine (SVM) as our classification model.

  3. Prediction and Model Accuracy Assessment: We'll predict the cities for the test set and assess the model's accuracy.


1. Data Splitting (Train-Test Split)

from sklearn.model_selection import train_test_split

# Splitting features and target
X = train_data.drop('city', axis=1)
y = train_data['city']

# Splitting training and validation sets
X_train, X_val, y_train, y_val = train_test_split(X, y, test_size=0.2, random_state=42)


2. Model Fitting (Support Vector Machine Classifier)

from sklearn.svm import SVC

# Training an SVM classifier
model = SVC()
model.fit(X_train, y_train)


3. Prediction and Model Accuracy Assessment

from sklearn.metrics import accuracy_score

# Making predictions
predictions = model.predict(X_val)

# Assessing accuracy
accuracy = accuracy_score(y_val, predictions)
print("Accuracy:", accuracy)

Output

Accuracy: 0.89

This section provided a practical example of how dimensionality impacts the modeling process. We considered a realistic problem, predicting a city based on house features, and looked at how different numbers of observations could affect overfitting.


III. Enhancing Models by Adding Features


A. Importance of Adding More Features


Adding features gives a model more information to learn from, and the right additions can noticeably improve performance. The choice matters, though: irrelevant or redundant features inflate dimensionality and can cause overfitting, so every new feature should earn its place.


Example Analogy


Think of features as ingredients in a recipe. The right blend of ingredients can create a delightful dish. Adding too many unrelated ingredients may ruin the taste. Similarly, in modeling, the right features enhance performance, while irrelevant features can lead to confusion and overfitting.


Adding New Features

# Adding a new feature, the total number of rooms
data['total_rooms'] = data['bedrooms'] + data['bathrooms']

# Display the first few rows with the new feature
print(data.head())


B. Selection of Features (Floors, Bathrooms, Surface Area)


Choosing the right features requires an understanding of the business context along with careful data exploration. For our house data, features like floors, bathrooms, and surface area are likely to be informative.

Selecting Specific Features

# Selecting specific features
selected_features = ['floors', 'bathrooms', 'surface_area']
X = data[selected_features]

# Displaying the selected features
print(X.head())


C. Increasing Observations to Avoid Overfitting


Increasing the number of observations (samples) can help in avoiding overfitting, especially when working with a high-dimensional dataset.


Using More Observations

# Assuming a larger dataset is available
large_data = pd.read_csv('large_house_data.csv')

# Using the same selected features
X_large = large_data[selected_features]

# Observing the difference in the number of observations
print("Original dataset shape:", X.shape)
print("Larger dataset shape:", X_large.shape)


D. The Phenomenon of the Curse of Dimensionality


When too many features are added, the model may suffer from the "Curse of Dimensionality," where the data becomes sparse, leading to overfitting.


Example Analogy


Imagine trying to paint a picture with too many colors. It can become confusing, and the original image might get lost. Similarly, too many features can confuse the model, leading to poor performance.


E. Dimensionality Reduction Techniques


When faced with high dimensionality, we can apply techniques like Principal Component Analysis (PCA) to reduce dimensions without losing important information.


Applying PCA

from sklearn.decomposition import PCA

# Applying PCA to reduce dimensions
pca = PCA(n_components=2)
X_reduced = pca.fit_transform(X)

# Displaying the reduced dimensions
print("Reduced shape:", X_reduced.shape)

In this section, we explored how adding features affects the model, considering both the benefits and potential pitfalls. We learned about the importance of selecting relevant features and the dangers of high dimensionality, demonstrating techniques to navigate these challenges.


IV. Automated Feature Selection


A. Selecting Features Based on Variance and Missing Values


Features with very little variance or a high percentage of missing values might not contribute much to the model. Automating the selection of features based on these criteria can be highly efficient.


Removing Low Variance Features

from sklearn.feature_selection import VarianceThreshold

# Removing features with zero variance
selector = VarianceThreshold()
X_high_variance = selector.fit_transform(X)

# Displaying the shape after removing low variance features
print("Shape after removing low variance features:", X_high_variance.shape)


B. Creating a Feature Selector: Variance Threshold


A variance threshold can be set to remove features below a certain variance level, ensuring that only features with significant variation are retained.


Setting a Variance Threshold

# Setting a specific threshold for variance
threshold = 0.2
selector = VarianceThreshold(threshold)
X_thresholded = selector.fit_transform(X)

# Displaying the shape after applying the threshold
print("Shape after applying threshold:", X_thresholded.shape)


C. Applying a Feature Selector to Reduce Dimensions


Applying feature selectors like the Variance Threshold not only reduces dimensions but also potentially enhances the model's performance.


Fitting a Model with Reduced Features

from sklearn.linear_model import LinearRegression

# Fitting a model with the reduced features
# (this assumes a numeric target `y` with one value per row of X_thresholded,
#  e.g. house price; the categorical city labels would not suit a linear regression)
model = LinearRegression()
model.fit(X_thresholded, y)

# Displaying the model's coefficients
print("Model coefficients:", model.coef_)


D. Variance Selector Caveats and Normalization


While variance selectors are powerful, they only make sense when the features are on comparable scales; otherwise, whichever feature happens to be measured in the largest units dominates the selection. A simple remedy is to rescale each feature, for example by dividing it by its mean. (Standardizing every feature to unit variance would make all variances equal to 1 and defeat the purpose of a variance threshold.)


Normalizing Features Before Selection

# Normalizing each feature by dividing it by its mean so that variances
# become comparable across features measured in different units
# (standardizing to unit variance would make every variance equal to 1
#  and render the threshold meaningless)
X_normalized = X / X.mean()

# Applying variance selector on normalized features
selector = VarianceThreshold(threshold)
X_normalized_thresholded = selector.fit_transform(X_normalized)

# Displaying the shape
print("Shape after normalization and thresholding:", X_normalized_thresholded.shape)


E. Missing Value Selector


Setting a threshold for the share of missing values a feature may have lets us automate this part of feature selection, retaining only the most complete (and therefore most reliable) features.


Identifying and Counting Missing Values

# Finding the percentage of missing values for each feature
missing_percentage = X.isnull().mean()

# Selecting features with missing values below a certain threshold
threshold_missing = 0.1
selected_features = missing_percentage[missing_percentage < threshold_missing].index

# Displaying the selected features
print("Selected features based on missing values:", selected_features)


F. Applying Missing Value Threshold


Applying a missing value threshold helps in retaining only the robust features, thus improving the model's reliability.


Applying Missing Value Threshold

# Applying the threshold to the DataFrame
X_missing_thresholded = X[selected_features]

# Displaying the shape after applying the missing value threshold
print("Shape after applying missing value threshold:", X_missing_thresholded.shape)


V. Analyzing Pairwise Correlation in Features


A. Introduction to Pairwise Correlation


Pairwise correlation is a measure of how two variables change together. If they tend to go up and down together, they are positively correlated; if one goes up while the other goes down, they are negatively correlated.


Example Analogy:


Think of two synchronized swimmers. If they perform perfectly in sync, their movements are positively correlated. If one performs the opposite movement to the other, they are negatively correlated.


B. Understanding Correlation Coefficients (r)


The correlation coefficient, often represented as 'r', quantifies the degree of correlation. It ranges from -1 (perfect negative correlation) to 1 (perfect positive correlation), with 0 indicating no correlation.
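
A small sketch with made-up numbers shows both ends of the scale: price rises with surface area (r close to +1), while age falls as surface area rises (r close to -1).


Computing Correlation Coefficients

import pandas as pd

# Toy values, purely for illustration
df = pd.DataFrame({
    "surface_area": [50, 80, 120, 200],
    "price":        [100, 160, 250, 410],  # grows with surface area
    "age":          [40, 30, 15, 5],       # shrinks as surface area grows
})

# Pearson correlation coefficients
print("r(surface_area, price):", df["surface_area"].corr(df["price"]))
print("r(surface_area, age):  ", df["surface_area"].corr(df["age"]))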


C. Creating a Correlation Matrix


A correlation matrix is a table showing the correlation coefficients between different variables. It helps identify relationships in the dataset.

import pandas as pd

# Creating a correlation matrix
correlation_matrix = X.corr()

# Displaying the correlation matrix
print(correlation_matrix)


D. Visualizing the Correlation Matrix


A visual representation can make the correlation matrix more interpretable.


Visualizing a Correlation Matrix

import seaborn as sns
import matplotlib.pyplot as plt

# Plotting the correlation matrix
sns.heatmap(correlation_matrix, annot=True)
plt.show()

Output: A heatmap will be displayed showing the pairwise correlation between the features.


E. Reducing Complexity by Removing Duplicate Information


Since a correlation matrix is symmetrical, half of the information is redundant. Removing duplicate information can reduce complexity.


Removing Duplicate Correlations

import numpy as np

# Masking the upper triangle of the correlation matrix
mask = np.triu(np.ones_like(correlation_matrix, dtype=bool))
reduced_correlation_matrix = correlation_matrix.mask(mask)

# Displaying the reduced correlation matrix
print(reduced_correlation_matrix)


VI. Removing Highly Correlated Features


A. Rationale for Removing Perfectly Correlated Features


Highly correlated features often contain redundant information, potentially leading to multicollinearity problems. Removing these features can improve the model's interpretability.


B. Identifying and Handling Highly Correlated Data


You can identify highly correlated features and remove them to avoid multicollinearity.


Identifying and Removing Highly Correlated Features

# Identifying features with at least one strong correlation (|r| > 0.9) to another feature
high_correlation_features = [column for column in reduced_correlation_matrix.columns
                             if any(reduced_correlation_matrix[column].abs() > 0.9)]

# Removing highly correlated features
X_reduced = X.drop(columns=high_correlation_features)

# Displaying the shape after removing highly correlated features
print("Shape after removing highly correlated features:", X_reduced.shape)


Conclusion


Throughout this comprehensive tutorial, we've taken a deep dive into the intricate concepts of dimensionality in data, the significance of features and their selection, the phenomenon of overfitting, and the relevance of analyzing pairwise correlations.


Beginning with an understanding of the curse of dimensionality, we explored how high-dimensional datasets can lead to overfitting and the solutions to mitigate these challenges. Through the example of modeling house prices, we introduced techniques to construct classifiers and evaluate their accuracy.


Further, we looked into enhancing models by adding features, automated feature selection, and the intricacies involved in these processes. Emphasizing the importance of feature variance, missing values, and correlation, we explored various techniques and code snippets to handle them effectively.


Visuals and code examples provided throughout the tutorial illustrated the concepts in a way that facilitated a hands-on approach. They allowed for a practical understanding of how to apply these techniques in real-world scenarios.


This tutorial aimed to guide both beginners and seasoned data scientists through essential concepts in data analysis, feature selection, and dimensionality reduction. With these concepts in hand, you're better equipped to tackle data-driven problems and to build models that are more efficient, accurate, and interpretable.


Thank you for taking this journey through Python and data science with me. I hope you found this tutorial enlightening and that it serves as a valuable resource in your future endeavors.
