
Curator’s Log: How recommender systems shape what we see, buy, and love

A strategic technology spanning the spectrum of data-driven methods, from classical ML to deep learning, recommender systems have evolved from rule-based suggestions to context-aware personalisation.

Recommender systems are the quiet curators of our digital lives. Whether you’re browsing an ecommerce site, (doom) scrolling through social media, or streaming your favourite series, you’re interacting with algorithms designed to predict what you might like next. These systems are ubiquitous, yet often invisible.

At their core, recommender systems solve a simple but overwhelming problem: from a vast universe of possibilities, how do we select the few items most likely to capture a user’s attention? With limited screen space (and even more limited human attention), the challenge is not just to recommend, but to recommend well.

“Better recommender systems allow companies to pursue the differentiation strategy by allowing them to create a better match between a customer’s preference and the products, services or content being offered.”

Early Approaches: Collaborative Filtering

Before machine learning (ML) or artificial intelligence (AI) took centre stage, one of the most intuitive approaches was collaborative filtering. Imagine trying to predict whether you would enjoy reading a recently published blog on ML. If we know what type of blogs you have liked or rated highly in the past, we can compare the newly published blog to those. If it’s similar, say, a blog that takes a playful yet informative tone on the subject of ML (with just a sprinkling of sci-fi), then we might recommend it.

But similarity isn’t just about one particular trait (or piece of metadata). Collaborative filtering digs deeper by analysing whether other subscribers liked both the newly published blog and the ones you liked in the past.

Subscriber | Music Log: Stardate 2025.0829 | Captain’s Log: Stardate 2025.0916 | Academic Log: Stardate 2025.0930
You | Liked | Liked | Unsure
Your Connection | Liked | Liked | Liked

If subscribers who liked Music Log and Captain’s Log (both of which you loved, right?!) also liked Academic Log (which you haven’t read or rated yet), we infer a positive correlation.

This method, known as item-item collaborative filtering, was a staple of early recommender systems, and it still holds value today.
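
For the technically curious, here is a minimal sketch of the item-item idea in Python, built on the toy table above. The 0/1 “liked” encoding and the cosine-similarity measure are illustrative choices for this example, not a prescribed implementation:

```python
import numpy as np

# Toy data from the table above: rows = subscribers, columns = blogs.
# 1 = liked, 0 = no signal yet ("Unsure" or unread) -- an illustrative encoding.
blogs = ["Music Log", "Captain's Log", "Academic Log"]
likes = np.array([
    [1, 1, 0],  # You
    [1, 1, 1],  # Your Connection
])

def cosine_similarity(a, b):
    """Cosine similarity between two item (column) vectors."""
    denom = np.linalg.norm(a) * np.linalg.norm(b)
    return float(a @ b / denom) if denom else 0.0

# Item-item: how similar is each blog to Academic Log, judged purely by
# who liked what? Blogs liked by the same subscribers score closer to 1.
target = likes[:, blogs.index("Academic Log")]
for i, name in enumerate(blogs[:-1]):
    print(f"{name} vs Academic Log: {cosine_similarity(likes[:, i], target):.2f}")

# Because Academic Log sits close to blogs you already liked,
# it becomes a candidate recommendation for you.
```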

Collaborative Filtering Types Summary

Type | Method | Example
Item-Item (Item-Based) | Find items similar to those you liked in the past | If you liked blog 1, you may also like the recently published blog 2
User-User (User/Subscriber-Based) | Find users/subscribers who like the same things as you | Subscribers who liked blog 1 (as you did) also loved the recently published blog 2, so you may like the recently published blog 2 too (there’s a tongue-twister in this somewhere…)
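
And here is the user-user flavour in the same spirit, again a purely hypothetical toy (the blog names, the 0/1 likes and the “nearest taste twin” logic are all invented for illustration):

```python
import numpy as np

# Hypothetical interactions: rows = subscribers, columns = blogs, 1 = liked.
blogs = ["Blog 1", "Blog 2 (newly published)"]
likes = np.array([
    [1, 0],  # You: liked blog 1, haven't interacted with blog 2 yet
    [1, 1],  # Subscriber B: liked both
    [0, 1],  # Subscriber C: only liked blog 2
])

def cosine(a, b):
    denom = np.linalg.norm(a) * np.linalg.norm(b)
    return float(a @ b / denom) if denom else 0.0

# User-user: find the subscriber whose likes look most like yours...
similarities = [cosine(likes[0], likes[i]) for i in range(1, len(likes))]
best_match = 1 + int(np.argmax(similarities))

# ...then recommend whatever they liked that you haven't seen yet.
recommendations = [
    blogs[j] for j in range(len(blogs))
    if likes[best_match, j] == 1 and likes[0, j] == 0
]
print("Most similar subscriber:", best_match, "| recommend:", recommendations)
```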

Limitations of Traditional Methods

Despite its cleverness, item-item collaborative filtering has limitations. It relies heavily on overlap: users who have rated both items. If that overlap is sparse, the similarity calculation becomes unreliable. Moreover, it often ignores rich metadata: genre, user preferences, and especially unstructured data like product descriptions or reviews. This is where more modern machine learning begins to shine.

  1. Cold Start Problem: New items won’t get recommended because they don’t have enough interactions yet.
  2. Data Sparsity: If there are millions of items, most users will only interact with a small number of them. This makes it harder to find strong item relationships for recommendations.
  3. Popularity Bias: Frequently bought or read items dominate recommendations. Less popular or niche items are ignored even if they might be a good match (Ah ha! This may well explain my LinkedIn reaction and impression statistics!)
  4. Scalability Issues: Comparing every item to every other item can be computationally expensive as data grows. Companies need efficient algorithms to handle large datasets.

There Are Workarounds

  1. Combine Collaborative Filtering with Content-Based Filtering by using information about the items (or blogs) themselves. For example, if a newly published blog is similar in subject to a popular blog, it can be recommended even before it has many views (see the sketch just after this list).
  2. Use a technique called Matrix Factorisation to uncover hidden patterns in the data. This technique is typically used in explicit feedback systems (e.g., ratings), while implicit feedback (clicks, views) often requires additional modelling.
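
As a hedged illustration of the content-based half of that first workaround, the sketch below uses scikit-learn’s TF-IDF vectoriser to match a brand-new blog to the catalogue by description alone. The blog descriptions are invented for this example:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Invented blog descriptions -- content-based filtering only needs the items'
# own metadata, so a brand-new blog with zero views can still be matched.
catalogue = {
    "Captain's Log":  "a playful machine learning blog with a sprinkling of sci-fi",
    "Academic Log":   "a technical deep dive into machine learning research papers",
    "Strategist Log": "business strategy and marketing for non-technical readers",
}
new_blog = "a light-hearted introduction to machine learning with sci-fi references"

# Turn every description (including the new blog's) into TF-IDF vectors.
vectoriser = TfidfVectorizer(stop_words="english")
vectors = vectoriser.fit_transform(list(catalogue.values()) + [new_blog])

# Compare the new blog against the existing catalogue.
existing, newest = vectors[: len(catalogue)], vectors[len(catalogue):]
scores = cosine_similarity(newest, existing).ravel()
for title, score in zip(catalogue, scores):
    print(f"{title}: {score:.2f}")
# The new blog lands closest to Captain's Log, so it can be shown to
# Captain's Log readers before it has gathered a single view.
```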

ML’s Matrix Factorisation Explained

Matrix factorisation is a technique used to learn latent factors in data, and it is especially useful in recommendation systems like Netflix or Spotify. It works by breaking a large table (called a matrix) into two smaller ones to help predict missing information.

Imagine we have the following rating data:

Reader | Blog A (Captain’s Log) | Blog B (Academic Log) | Blog C (Strategist Log)
Reader 1 | 5 | ? | 2
Reader 2 | 1 | 2 | ?

Rows = Readers | Columns = Blogs | Values = Ratings (1 to 5, where “?” are missing ratings we want to predict)

Matrix factorisation helps fill in those blanks by splitting the table into two matrices:

Matrix 1: Reader Features Matrix: the types of blogs each Reader tends to like.
Matrix 2: Item (or, in this case, Blog) Features Matrix: how much of each feature every blog has, i.e., which types of Readers tend to like it.

Let’s break the process down. I’ve tried to do this in the simplest of terms, so if you’re a techy reader then please forgive me for not being completely accurate:

Step 1

Let’s assume that two hidden (or latent) features exist that explain ratings:
Feature 1: How technical is this blog?
Feature 2: How beginner-friendly is this blog?

Step 2

Let’s create the two smaller matrices:

Reader Matrix: How much does each Reader prefer each hidden feature?
Reader 1 might score high on “technical preference”
Reader 2 might score high on “beginner-friendly preference”

Item (or Blog) Matrix: How much does each blog have of each hidden feature?
Blog A = highly technical
Blog B = highly technical
Blog C = beginner-friendly
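
To see how the two small matrices recombine, here is a toy sketch with hand-picked (not learned) feature values; step 3 is where an algorithm would actually learn them:

```python
import numpy as np

# Hand-picked, purely illustrative feature values -- in practice these are
# learned from the known ratings (that's what step 3 does).
# Rows of P = Readers; columns = ["technical", "beginner-friendly"] preference.
P = np.array([
    [0.9, 0.1],   # Reader 1: prefers technical content
    [0.2, 0.9],   # Reader 2: prefers beginner-friendly content
])

# Rows of Q = the same two hidden features; columns = Blog A, Blog B, Blog C.
Q = np.array([
    [5.0, 4.5, 1.0],   # how "technical" each blog is
    [1.0, 1.5, 5.0],   # how "beginner-friendly" each blog is
])

# Multiplying Reader preferences by Blog features fills in the whole table,
# including the cells we never observed.
print(np.round(P @ Q, 1))
```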

Step 3

The algorithm learns through iteration:

  • Starts with random guesses for these hidden values
  • Multiplies the matrices to predict ratings
  • Compares predictions to known ratings (5, 2, 1, 2)
  • Adjusts the hidden values to reduce errors
  • Repeats until predictions are accurate

Step 4

Once trained, the algorithm multiplies the Reader and Blog values to predict the “?” ratings.

Common algorithms for Matrix Factorisation include Singular Value Decomposition (SVD), Non-negative Matrix Factorisation (NMF), Alternating Least Squares (ALS) and Gradient Descent. Let’s not get into how these algorithms work, as I found it hard myself to get my head around it all (sorry techies), but in the simplest of terms, these algorithms learn “fingerprints” for each Reader and Blog, then use those fingerprints to fill in the blanks intelligently.
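
For readers who do want a peek under the bonnet, here is a minimal sketch of the Gradient Descent flavour applied to the toy table above. The learning rate, regularisation and number of passes are arbitrary choices for this example, and a real system would more likely use a library implementation (such as Surprise’s SVD or Spark’s ALS) than a hand-rolled loop:

```python
import numpy as np

# Known ratings from the table: (reader index, blog index, rating).
# Blog indices: 0 = Blog A, 1 = Blog B, 2 = Blog C.
observed = [(0, 0, 5.0), (0, 2, 2.0), (1, 0, 1.0), (1, 1, 2.0)]
n_readers, n_blogs, n_features = 2, 3, 2

rng = np.random.default_rng(42)
P = rng.normal(scale=0.1, size=(n_readers, n_features))   # Reader "fingerprints"
Q = rng.normal(scale=0.1, size=(n_features, n_blogs))     # Blog "fingerprints"

learning_rate, regularisation = 0.05, 0.02
for _ in range(2000):                      # repeat until the fit is good enough
    for u, i, r in observed:
        error = r - P[u] @ Q[:, i]         # how wrong is the current prediction?
        # Nudge both fingerprints to shrink the error (plus light regularisation).
        P[u]    += learning_rate * (error * Q[:, i] - regularisation * P[u])
        Q[:, i] += learning_rate * (error * P[u]   - regularisation * Q[:, i])

# Once trained, multiplying the two matrices fills in the "?" cells.
print(np.round(P @ Q, 1))
```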

So, if Reader 1 loves technical blogs (like Blog A, which they rated 5) and Blog B is also technical, the model predicts Reader 1 would rate Blog B highly too, even though we never saw that rating!

If Reader 2 is a beginner in the subject and therefore rated both technical Blogs A and B low, the model predicts they will love Blog C and rate it highly for its beginner content.

Rating Predictions:

Reader | Blog A (Techy) | Blog B (Techy) | Blog C (Beginner)
Reader 1 (Techy) | 5 | 4.8 | 2.1
Reader 2 (Beginner) | 1.2 | 2 | 4.9

Note: ratings are hypothetical examples after training, not manually calculated.

Every Reader and Blog has an intrinsic rating tendency. The final rating is a combination of the Reader’s average rating, the Blog’s average rating, and most importantly, the interaction between the Reader and the Blog.
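
In many matrix factorisation implementations this is written as: predicted rating ≈ overall average rating + Reader bias + Blog bias + (Reader fingerprint · Blog fingerprint), where that last dot product is the interaction learned above.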

Why It Matters

Recommender systems are more than just convenience tools. They shape our digital experiences, influence our choices, and reflect our preferences (sometimes scarily better than we know ourselves). As ML continues to evolve, so too will the sophistication of these systems, blending structured and unstructured data to deliver ever more personalised recommendations.

Final Thoughts

From simple “you might like this” lists to intelligent guides shaping our digital journeys, recommender systems have come a long way. With rapid advances in ML, AI and personalisation, the future looks even more adaptive, delivering context-rich recommendations that drive engagement and loyalty. Tomorrow’s systems won’t just recommend; they’ll understand, anticipate, and curate experiences that feel almost intuitive. As these systems become more intuitive and influential, it’s worth remembering that even the smartest recommendations can carry biases, blind spots, or unintended consequences, so staying curious and critical is part of the journey too.

