- By: Áine Byrne
In the first part of Building Scalable AI: The Role of Foundation Models, we explored foundation models, a powerful and scalable class of machine learning (ML) model. They are no longer just a research tool; they have the real capacity to deliver strategic advantage and help your company progress, grow and succeed. Having covered the theory, let's now look at how companies are leveraging the power of foundation models to succeed.
Today, ML is being applied across businesses and industries, and its impact can be grouped into three core applications:
For a better visual representation of the three areas, here are some individual use cases along with where they are typically deployed.
| Application Area | Description | Example Use Cases | Business Impact Examples |
|---|---|---|---|
| Perception & Recognition | Interpreting sensory data | Facial recognition, autonomous vehicles | Retail: In-store analytics using video feeds to optimise layout and customer flow. Healthcare: Radiology image analysis for early disease detection. Manufacturing: Visual inspection systems for quality control. |
| Forecasting & Decision Support | Predicting outcomes | Fraud detection, medical diagnosis | Finance: Credit risk modelling and real-time fraud alerts. Supply Chain: Demand forecasting and logistics optimisation. Energy: Predictive maintenance for grid infrastructure. |
| Content Creation & Automation | Generating new content | Marketing copy, image generation | Legal: Drafting contracts and summarising case law. Education: Generating personalised learning materials. Customer Service: Automating responses and building conversational agents. |
These strategic applications of ML, many of them increasingly powered or enhanced by foundation models, are reshaping industries. Whether it's generating campaign copy, automating report writing, creating visual assets, or building interactive learning materials, ML-powered content creation is unlocking new efficiencies and creative possibilities across industries.
Understanding how ML models work is essential, not just for developers, but for anyone considering AI adoption within their organisation, and that includes you. Building intuition about when and how ML is appropriate ensures strategic use across departments and functions.
Take supply chain optimisation, for example. With historical delivery data, a model can predict delays or demand spikes. But its success depends on the quality, completeness and relevance (remember feature engineering's catch in our previous blog?) of the input data. Just like a human expert relies on context and experience, an ML model needs sufficient context to make accurate and meaningful predictions.
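To make the demand-spike idea concrete, here is a minimal, purely illustrative sketch in plain Python. It flags likely spikes against a trailing average of hypothetical weekly shipment figures; a production system would use a trained forecasting model and far richer features, and every number below is invented.

```python
# Illustrative only: flag demand spikes against a trailing moving average.
# All figures are invented; real systems would use a trained forecasting model.

def trailing_average(values, window):
    """Average of the previous `window` values, excluding the current one."""
    return [
        None if i < window else sum(values[i - window:i]) / window
        for i in range(len(values))
    ]

def flag_spikes(demand, window=3, threshold=1.5):
    """Flag periods where demand exceeds `threshold` x the trailing baseline."""
    baseline = trailing_average(demand, window)
    return [b is not None and d > threshold * b
            for d, b in zip(demand, baseline)]

weekly_units = [100, 104, 98, 101, 99, 180]  # hypothetical weekly shipments
print(flag_spikes(weekly_units))  # only the final week is flagged
```

Even this toy version fails quietly if the history is too short or unrepresentative, which is exactly the data-quality point above.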
It’s also important to recognise that foundation models, while powerful and adaptable, are not always the best fit for every ML task. Simpler models or domain-specific approaches may be more efficient and cost-effective depending on the problem. Strategic leaders should evaluate the trade-offs before defaulting to foundation models.
These principles apply across industries. Whether you're forecasting demand, automating document review, or enhancing user experiences, thoughtful ML application ensures it delivers value where it matters most.
As business leaders explore ML and AI’s potential, here are some key principles to keep in mind:
1 Foundation models can be strategic engines: They scale intelligence across tasks, enabling smarter automation, deeper insights, and faster innovation.
2 Fine-tuning (especially RLHF*) aligns models with business goals, ensuring outputs reflect brand tone, values, and context.
3 Three high-impact ML applications are transforming industries: Perception & Recognition, Forecasting & Decision Support, and Content Creation & Automation.
4 Thoughtful adoption matters: Success depends on data quality, context, and strategic alignment with business objectives.
* While RLHF offers strong alignment benefits, it’s important to understand its practical limitations before committing resources.
While RLHF (Reinforcement Learning from Human Feedback) has proven effective in aligning model outputs with human preferences, it’s not without trade-offs. The process is resource intensive, requiring substantial human annotation and computational power, which can limit scalability for smaller teams or rapid iteration cycles. In many cases, RLHF also involves iterative feedback loops to refine outputs, adding time and complexity to the customisation process. Strategists should weigh these costs against the benefits of precision and control, especially when considering deployment at scale.
Let's take a closer look at the trade-offs of both RLHF and prompt engineering in terms of the benefits and limitations they bring to your project:
**RLHF**

| Benefits | Limitations |
|---|---|
| Aligns model outputs with human values | Resource-intensive (human annotation + compute) |
| Improves reliability and reduces bias | Scalability challenges for smaller teams |
| Enables fine-grained control over outputs | Requires iterative feedback loops, adding complexity |
| Enhances trust and compliance | Longer timelines compared to lightweight methods |
**Prompt engineering**

| Benefits | Limitations |
|---|---|
| Fast and cost-effective: No retraining required. Works directly with existing models. | Limited control: Cannot deeply alter model behaviour or domain-specific knowledge. |
| Low technical barrier: Requires creativity and domain understanding rather than heavy compute. | Fragile performance: Small changes in phrasing can lead to inconsistent outputs. |
| Ideal for experimentation: Quick iterations for prototypes or low-stakes tasks. | Scalability challenges: Hard to maintain prompt quality across large teams or complex workflows. |
| Compatible with API-based models: Works well with commercial foundation models. | No guarantee of alignment: Ethical or compliance constraints may not be fully enforced. |
A key takeaway here: RLHF is powerful but not universally practical. Choose it when precision outweighs speed and cost.
For lighter customisation, one option is prompt engineering: crafting inputs to guide model behaviour without retraining. This approach is most effective when working with API-based or general-purpose foundation models, where direct fine-tuning isn't practical. While it offers speed and cost efficiency, it provides less granular control than RLHF.
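To make this concrete, here is a minimal sketch of prompt engineering as a reusable template. The brand name, tone, and `call_model` function are hypothetical placeholders, not a real API; the point is that constraints and context are encoded in the input rather than in the model's weights.

```python
# Hypothetical prompt template; `call_model` is a stand-in for whichever
# API client you actually use (it is NOT a real library function).

TEMPLATE = (
    "You are a customer-service assistant for {brand}.\n"
    "Tone: {tone}.\n"
    "Never promise refunds without human review.\n\n"
    "Customer message: {message}\n"
    "Reply:"
)

def build_prompt(brand, tone, message):
    """Assemble brand constraints, tone, and the user's message into one input."""
    return TEMPLATE.format(brand=brand, tone=tone, message=message)

prompt = build_prompt("Acme Ltd", "friendly and concise", "Where is my order?")
# reply = call_model(prompt)  # hypothetical call; swap in your provider's SDK
print(prompt)
```

Note the fragility trade-off from the table above: small wording changes in a template can shift outputs, so teams typically version and test their prompts like any other asset.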
Foundation models are increasingly available off the shelf, from open-source platforms like Hugging Face to commercial APIs like OpenAI and Google Vertex AI. Depending on your resources and goals, you can fine-tune these models yourself using your own data, or leverage pre-aligned models for faster deployment. While open-source models offer flexibility, commercial APIs provide scalability and support. Strategic leaders should weigh cost, control, and compliance when choosing their approach.
Here are some common ways businesses and developers access foundation models:
**Open-source models.** You can download and fine-tune models like:

These are free to use, and you can fine-tune them locally or in the cloud.

**Commercial APIs.** These come with fine-tuning support and include platforms like:

These offer managed fine-tuning: you upload training data and they handle the rest. You don't get full model weights, but you can customise behaviour.

**Enterprise licensing.** Some vendors offer licensed access to proprietary models for on-premise or private cloud deployment, often with support for fine-tuning. This is more common in regulated industries.
When deciding which route to take, it’s important to consider how you want to implement it. For example, ask yourself if you wish to fine-tune them in-house. The answer is different for each as outlined below:
- Open-source: You can fine-tune directly using frameworks like PyTorch, TensorFlow, or Hugging Face.
- Commercial APIs: You fine-tune via prompts or managed fine-tuning (e.g., OpenAI's fine-tuning endpoint).
- Enterprise licensing: Minimal fine-tuning. Think of it as purchasing software that you customise to your company's requirements.
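As a toy illustration of what fine-tuning does conceptually (plain Python, not a real framework workflow): we start from a "pretrained" weight and nudge it with a few gradient steps on a small domain dataset. The model, data, and numbers here are invented; real fine-tuning would use one of the frameworks above.

```python
# Toy sketch of the fine-tuning idea (illustrative only; real fine-tuning
# uses frameworks like PyTorch, TensorFlow, or Hugging Face).

def fine_tune(weight, data, lr=0.01, steps=100):
    """Gradient descent on mean squared error for a one-parameter model y = w*x."""
    for _ in range(steps):
        grad = sum(2 * (weight * x - y) * x for x, y in data) / len(data)
        weight -= lr * grad
    return weight

pretrained_w = 2.0                             # "pretrained" starting point
domain_data = [(1, 3.0), (2, 6.0), (3, 9.0)]   # invented domain truth: y = 3x
tuned_w = fine_tune(pretrained_w, domain_data)
print(round(tuned_w, 2))  # converges towards 3.0
```

The pretrained starting point is what makes fine-tuning cheap relative to training from scratch: most of the "knowledge" is already in the weights, and the domain data only adjusts it.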
When it comes to the enterprise licensing option, it's important to watch for:
Once you've decided which route to take, it's important to step back and make sure you've considered the full picture. While deploying AI the right way can result in huge wins for your company, getting it wrong can have devastating effects.
To help you through this process, we've created the following checklist. Split into five areas, it will help you assess your operational readiness for deployment, and if you're not ready, where you need to focus.
ML is not just a technical tool; it's a strategic asset. When applied thoughtfully, it can streamline operations, enhance decision-making, and unlock new capabilities that differentiate your business.
As AI continues to evolve, foundation models will play a central role in shaping competitive advantage, driving innovation, and enabling new capabilities. If you don't give implementation the time it deserves, at best you won't feel the benefits; at worst, it could set you back. But embrace it correctly and it has the potential to supercharge your business.
Just one last thought I'd love for you to take away, if nothing else from this blog: in the world of AI, it's not the algorithms that choose the path. Leaders, and readers, must decide whether to take the blue pill of efficiency alone, or the red pill of ethical, responsible, human-centred impact.
Catch up on our Machine Learning series below:
We would love to speak with you.
Feel free to reach out using the below details.