Recommendation System Using Matrix Factorization

Published on: 11 April 2020

Model-Based Collaborative Filtering:

Model-based collaborative approaches rely only on user-item interaction information and assume a latent model that explains these interactions. For example, matrix factorization algorithms decompose the huge, sparse user-item interaction matrix into the product of two smaller, dense matrices: a user-factor matrix (containing user representations) that multiplies a factor-item matrix (containing item representations).

Matrix Factorization:

The main assumption behind matrix factorization is that there exists a fairly low-dimensional latent space of features in which we can represent both users and items, such that the interaction between a user and an item can be obtained by computing the dot product of their corresponding dense vectors in that space.
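As a rough illustration (with made-up numbers, not the book data used later in this post), the sketch below shows a user-factor matrix multiplied by a factor-item matrix, and a single predicted interaction obtained as a dot product:

```python
import numpy as np

# Toy example: 4 users and 5 items represented in a 2-dimensional latent space.
P = np.array([[1.2, 0.3],
              [0.8, 1.1],
              [0.1, 0.9],
              [1.0, 0.2]])                    # user-factor matrix (4 x 2)
Q = np.array([[0.9, 0.4, 1.1, 0.2, 0.7],
              [0.3, 1.2, 0.5, 0.8, 0.1]])     # factor-item matrix (2 x 5)

R_hat = P @ Q                                 # all predicted interactions (4 x 5)
prediction = P[0] @ Q[:, 2]                   # predicted interaction of user 0 with item 2
print(R_hat.shape, round(prediction, 2))
```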

Since sparsity and scalability are the two biggest challenges for the standard CF method, a more advanced method emerged that decomposes the original sparse matrix into low-dimensional matrices with latent factors/features and less sparsity: that is matrix factorization.

What matrix factorization ultimately gives us is how strongly a user is aligned with a set of latent features, and how well a movie fits into that set of latent features. Its advantage over the standard nearest-neighborhood approach is that even if two users haven't rated any of the same movies, it is still possible to find the similarity between them, provided they share similar underlying tastes, i.e. latent features.

To see how a matrix is factorized, the first thing to understand is Singular Value Decomposition (SVD). From linear algebra, any real matrix R can be decomposed into three matrices U, Σ, and V, such that R = UΣVᵀ. Continuing with the movie example, U is an n × r user-latent-feature matrix and V is an m × r movie-latent-feature matrix. Σ is an r × r diagonal matrix containing the singular values of the original matrix, which simply represent how important each feature is for predicting user preference.
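To make the decomposition concrete, here is a minimal NumPy sketch on a tiny made-up ratings matrix (not the book data): it computes R = UΣVᵀ and then keeps only the top r singular values for a low-rank approximation.

```python
import numpy as np

# Tiny made-up ratings matrix standing in for R (4 users x 4 movies).
R = np.array([[5., 3., 0., 1.],
              [4., 0., 0., 1.],
              [1., 1., 0., 5.],
              [0., 0., 5., 4.]])

# Full SVD: R = U @ diag(s) @ Vt
U, s, Vt = np.linalg.svd(R, full_matrices=False)

# Keep only the top-r singular values for a rank-r approximation of R.
r = 2
R_approx = U[:, :r] @ np.diag(s[:r]) @ Vt[:r, :]
print(np.round(R_approx, 2))
```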

Matrix Factorization is simply a mathematical tool for playing around with matrices.

Matrix factorization techniques are usually more effective because they allow us to discover the latent (hidden) features underlying the interactions between users and items (books).

We use SVD (Singular Value Decomposition), one of the matrix factorization models, for identifying latent factors.

Steps:

1. Pivot table: the index is userID and the columns are book titles.

2. Transpose it, so that books become rows and users become columns.

3. Decompose it with TruncatedSVD; the output is a books-by-latent-features matrix.

4. Compute Pearson's R correlation coefficient for every book pair.

5. Compare: pick a book and find the items that have high correlation coefficients (between 0.9 and 1.0) with it.

I have built the books recommendation system both through k-nearest neighbors (kNN) and through matrix factorization.

Data Preprocessing:

Similar to kNN, we convert our USA Canada user rating table into a 2D matrix (called a utility matrix here) and fill the missing values with zeros.
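A minimal pandas sketch of this step is shown below; the file name and the column names ('userID', 'bookTitle', 'bookRating') are assumptions for illustration, not necessarily the exact names in the dataset.

```python
import pandas as pd

# Assumed input: the filtered USA/Canada ratings with hypothetical columns
# 'userID', 'bookTitle' and 'bookRating'.
usa_canada_user_rating = pd.read_csv("usa_canada_user_rating.csv")

# Utility matrix: one row per user, one column per book, missing ratings -> 0.
utility_matrix = usa_canada_user_rating.pivot_table(
    index="userID", columns="bookTitle", values="bookRating"
).fillna(0)
print(utility_matrix.shape)   # roughly 40017 users x 2442 books in this post
```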

We then transpose this utility matrix, so that the bookTitles become rows and userIDs become columns.
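Continuing the sketch above, the transpose is just:

```python
# Books become rows and users become columns: (num_books x num_users).
X = utility_matrix.values.T
print(X.shape)   # e.g. 2442 x 40017
```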

We then use TruncatedSVD to decompose the transposed matrix, fitting it to the model for dimensionality reduction. The compression happens along the DataFrame's columns (the users), since we must preserve the book titles. We choose n_components=12 for just 12 latent variables, and the data's dimensions are reduced significantly, from the original 40017 × 2442 utility matrix down to 2442 × 12.
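A hedged sketch of this step with scikit-learn's TruncatedSVD, continuing from X above (the random_state value is an arbitrary choice, not taken from the original code):

```python
from sklearn.decomposition import TruncatedSVD

# Compress the ~40017 user columns of X into 12 latent features per book.
svd = TruncatedSVD(n_components=12, random_state=17)
book_latent_matrix = svd.fit_transform(X)   # shape: (2442, 12)
print(book_latent_matrix.shape)
```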

We calculate Pearson's R correlation coefficient for every book pair in our final matrix. Finally, we pick a book and find the books that have high correlation coefficients (between 0.9 and 1.0) with it.
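Continuing the sketch, the correlation and filtering step might look like this; the query title is a placeholder, not a specific book from the post.

```python
import numpy as np

# Pearson correlation between every pair of books in the 12-dimensional latent space.
corr = np.corrcoef(book_latent_matrix)      # shape: (2442, 2442)

# Pick any book and keep the titles whose correlation with it lies
# between 0.9 and 1.0 (excluding the book itself, whose correlation is 1.0).
book_titles = list(utility_matrix.columns)
query = book_titles[0]                      # placeholder: use any real title here
query_corr = corr[book_titles.index(query)]
recommendations = [title for title, c in zip(book_titles, query_corr)
                   if 0.9 <= c < 1.0 and title != query]
print(recommendations[:10])
```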

This is our matrix factorization recommendation system; we can build other recommendation models using this same technique.

That's all for this blog.