- By: Áine Byrne
Music log, stardate 2025.0829: Artificial Intelligence (AI) is everywhere, like that catchy tune you just can’t get out of your head, from cars that drive themselves to apps that know what you want before you do (creepy, yet impressive). But AI isn’t one single technology; it’s a whole band of techniques working together. One of its lead performers? Machine learning (ML). ML is the part of AI that learns from data and makes predictions, and while it might sound complex, it’s more structured rehearsal than futuristic sparkle.
But understanding how it works? That’s a whole different gig. So, let’s break it down. No jargon, no mystery, just a clear, practical intro to ML and how it’s quietly remixing everything from your morning commute to your favourite streaming service recommendations.
At its core, ML is a prediction machine. It takes patterns from past data and uses them to anticipate what’s likely to happen next. It’s not about magic or futuristic tech; it’s about using what we already know to make smart, data-driven decisions. Whether it’s forecasting asset prices, diagnosing medical conditions, or guessing whether that pedestrian is about to cross the road, ML maps what we know onto what we don’t, and it does so fast and (mostly) accurately.
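The "prediction machine" idea can be sketched in a few lines of Python: fit a straight line to past observations, then extrapolate. The commute times below are invented purely for illustration.

```python
# A minimal sketch of ML as a prediction machine: learn a pattern from
# past data, then use it to anticipate what comes next.
days = [1, 2, 3, 4]          # past observations (day number)
minutes = [30, 32, 34, 36]   # commute time on each day (invented)

# Least-squares fit for a line: minutes = a * day + b.
n = len(days)
mean_x = sum(days) / n
mean_y = sum(minutes) / n
a = (sum((x - mean_x) * (y - mean_y) for x, y in zip(days, minutes))
     / sum((x - mean_x) ** 2 for x in days))
b = mean_y - a * mean_x

tomorrow = a * 5 + b  # map what we know onto what we don't
```

Real ML models are far richer than a straight line, but the move is the same: patterns in, predictions out.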
The following well-rehearsed setlist of machine learning fundamentals is inspired by the MIT CSAIL ‘AI for Business Strategy’ program I had the chance to attend earlier this year. I’ve adapted them here to make ML feel more like a jam session than a lecture.
What are you trying to solve? Is it recognising faces, diagnosing diseases, or predicting traffic jams? Start with the problem or pain point. ML doesn’t do vague.
ML learns from data the way musicians learn from sheet music. Through supervised learning, it studies pairs of inputs and desired outputs. Just as a musician needs quality sheet music to master a piece, ML models need clean, representative training data to recognise patterns.
No sheet music, no melody → no training data, no learning.
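In code, that sheet music is just a list of (input, desired output) pairs. The songs, features, and genres below are invented for illustration.

```python
# A minimal sketch of supervised training data: each example pairs an
# input (the sheet music) with the desired output (the label).
training_data = [
    ({"tempo_bpm": 180, "distortion": 0.9}, "metal"),
    ({"tempo_bpm": 110, "distortion": 0.1}, "jazz"),
    ({"tempo_bpm": 170, "distortion": 0.8}, "metal"),
]

# Supervised learning studies these pairs to map inputs to outputs.
inputs = [features for features, _ in training_data]
labels = [genre for _, genre in training_data]
```

If the pairs are noisy, biased, or unrepresentative, the model rehearses the wrong piece; hence the emphasis on clean training data.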
Whatever the input may be (images, molecules, or cat videos), it needs to be translated into a format ML can understand. This is referred to as the ‘representational challenge’. If that sounds too nerdy, don’t worry: I explain it in a later section with yet another musical twist.
Then it’s time to decide how the system will learn: this is your architecture, the blueprint that defines how the model processes data and learns from it. It’s like choosing the right setup for your band: whether you need a solo acoustic guitar, a synthesizer, or a full orchestra depends on the kind of performance (in ML terms: the kind of task or problem).
Examples of architectures include Convolutional Neural Networks (CNNs), which are great for image data (think guitar solos), and Transformers (not the robot kind), which handle sequences (think synthesizer layering complex sounds).
It’s not enough to ace the training data. The real test? Performing well on brand-new, unseen data.
Before any great gig, there’s a soundcheck. You don’t just assume the mic works or the bass won’t drown out the vocals; you test everything. That’s exactly what validation is in machine learning. To make sure your model isn’t just showing off with an air guitar, you hold back a chunk of data as a validation set. This simulates real-world conditions and helps avoid overfitting; it’s your way of making sure the system can handle the real-world gig, and not just the studio session.
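The soundcheck itself is a few lines of code: shuffle, then hold back a slice. The dataset here is a made-up list of (feature vector, label) pairs.

```python
import random

# A sketch of validation: hold back part of the data before training.
random.seed(42)
data = [([i / 100, (i * 7) % 100 / 100], "jazz" if i % 2 else "metal")
        for i in range(100)]
random.shuffle(data)  # avoid any ordering bias in the split

split = int(len(data) * 0.8)
train_set = data[:split]        # the studio session: used for learning
validation_set = data[split:]   # the soundcheck: never seen during training
```

A model that aces `train_set` but flops on `validation_set` is overfitting: it memorised the studio session instead of learning the song.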
In step 3, we mentioned the ‘representational challenge’: how data needs to be translated into something a machine can understand. As promised, here comes the musical analogy. Imagine you are trying to fill out a musical questionnaire describing a song (in ML terms: a Data Point) using a spreadsheet, or Spready if you’re feeling jazzy. You list out the tempo, key, genre, number of instruments, emotional vibe (yes, that’s a thing). This Spready list of traits or characteristics is your Feature Vector: a structured way to tell the machine what the song is like.
The act of selecting and formatting these traits is known as ‘feature engineering’: it’s how we tackle the representational challenge, crafting meaningful inputs that help the system learn.
You would then feed your jazzy Spready into the system and hope it can tell the difference between jazz and death metal (or even Klingon Opera).
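Here is what one row of the Spready might look like as code. The features, the scaling choices, and the song itself are illustrative assumptions, not a standard recipe.

```python
# A hedged sketch of feature engineering: turning a song's traits into a
# numeric Feature Vector the machine can work with.
def to_feature_vector(song):
    return [
        song["tempo_bpm"] / 200,                    # tempo, scaled to ~[0, 1]
        song["num_instruments"] / 10,               # rough band-size scale
        1.0 if song["vibe"] == "mellow" else 0.0,   # one flag per vibe
    ]

some_jazz_tune = {"tempo_bpm": 174, "num_instruments": 4, "vibe": "mellow"}
vector = to_feature_vector(some_jazz_tune)
```

Every song, however different, now becomes a list of numbers of the same length, which is exactly the format the next step needs.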
Once your data is represented as Feature Vectors, ML algorithms start looking for patterns that separate one group from another. Think of it like sorting your music library: jazz in one playlist, death metal in another. The system tries to draw the cleanest possible line between categories, so it can make confident predictions without getting thrown off by noise or feedback. Straightforward boundaries often perform better; it turns out simplicity isn’t just elegant, it’s the backstage pass to better generalisation.
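One of the simplest ways to draw that line is a nearest-centroid classifier: average each genre’s vectors, then assign new songs to the closest average. The [tempo, distortion] vectors below are invented and scaled to [0, 1].

```python
# A minimal sketch of separating categories with the simplest possible
# boundary: each class is summarised by its centroid (average point).
def centroid(points):
    return [sum(axis) / len(points) for axis in zip(*points)]

jazz = [[0.40, 0.10], [0.50, 0.20], [0.30, 0.15]]
metal = [[0.90, 0.95], [0.85, 0.90], [0.95, 0.85]]
centroids = {"jazz": centroid(jazz), "metal": centroid(metal)}

def predict(vector):
    # Pick the label whose centroid is closest (squared distance).
    def distance_to(label):
        return sum((a - b) ** 2 for a, b in zip(vector, centroids[label]))
    return min(centroids, key=distance_to)
```

A loud, distorted track like `[0.88, 0.90]` lands near the metal centroid; a quiet `[0.35, 0.10]` lands near jazz. Simple, and often surprisingly hard to beat.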
While this setlist focuses on supervised learning where the machine learns from labelled examples, it’s not the only style in the ML repertoire.
Some models improvise without labels (in ML terms: unsupervised learning). These models receive unlabelled data and try to find structure on their own, like grouping similar songs without any guidance. No labels, no supervision, just pattern discovery.
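A tiny sketch of that improvisation, in the style of one k-means assignment step: no genre labels anywhere, just songs grouped by which starting centre they sit closest to. The song vectors and centres are invented.

```python
# Unsupervised learning in miniature: group unlabelled songs by similarity.
songs = [[0.10, 0.20], [0.15, 0.10], [0.90, 0.80], [0.85, 0.90]]
centres = [[0.0, 0.0], [1.0, 1.0]]  # initial guesses; no labels involved

def nearest(point):
    return min(range(len(centres)),
               key=lambda i: sum((a - b) ** 2
                                 for a, b in zip(point, centres[i])))

# One assignment round: each song joins the cluster of its nearest centre.
# (Full k-means would then recompute the centres and repeat until stable.)
clusters = [nearest(song) for song in songs]
```

The algorithm never learns what "jazz" or "metal" means; it only discovers that the first two songs sound alike, and so do the last two.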
Others learn by trial and error, like a musician fine-tuning their instrument until they hit the right note (in ML terms: reinforcement learning). A real-world example would be a game-playing AI like AlphaGo, which learns to win by playing thousands of games and adjusting its strategy based on rewards (wins) and penalties (losses).
And then there are models that teach themselves using clues hidden in the data (in ML terms: self-supervised learning). To use another music analogy, it’s like a soloist composing a melody by anticipating the next note from the harmony already played, learning the tune as they go with no sheet music required. No external labels, but the system creates its own supervision from the data. Real-world examples include predicting the next word in a sentence or filling in missing parts of an image.
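Next-word prediction, the classic self-supervised task, fits in a few lines: each word in the data acts as the label for the word before it, so no human annotation is needed. The lyric below is invented.

```python
from collections import Counter, defaultdict

# A minimal sketch of self-supervision: the data labels itself.
lyrics = "la la la di la la di da".split()

next_word = defaultdict(Counter)
for current, following in zip(lyrics, lyrics[1:]):
    next_word[current][following] += 1   # the data supplies its own labels

def predict_next(word):
    # Anticipate the word most likely to follow, from counts alone.
    return next_word[word].most_common(1)[0][0]
```

Scale this idea up from bigram counts to billions of parameters and you have the core training trick behind modern language models.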
Different tasks call for different approaches, but the fundamentals in the setlist still apply: clear goals, good data, and plenty of practice.
Nowadays, we’ve moved on to end-to-end learning. Modern deep learning models don’t need you to fill out the musical questionnaire; they just listen to the whole track and figure it out themselves. Deep learning models learn both the features and the mappings in one go, adapting to whatever format the data comes in, be it graphs, images of your favourite pop star, or your favourite music app playlist.
While end-to-end learning simplifies the process for users, deep learning models often require vast amounts of data and significant computational power to train effectively.
Machine learning might seem futuristic at first glance, but behind the curtain, it’s less smoke and mirrors and more scales and rehearsals. It isn’t magic; it’s methodical. Like a great band, ML systems learn the rhythm, tune their instruments (in ML terms: parameters), and practise until they can perform confidently in front of a live audience (in ML terms: real-world data). There’s more to tuning than just parameters; think of it as setting the stage before the band plays. We’ll explore more of that in the next post.
So next time someone drops “machine learning” into a conversation like it’s a mysterious force, you’ll know better. It’s not magic, it’s smart, structured problem-solving. And if you understand the basics, you’re already ahead of the curve.