Why SLMs and Knowledge Graphs Will Power the Next Generation of Enterprise AI
By Stephen Breen, ZeroMission
Hardly a week goes by without a headline about the latest breakthrough in Large Language Models (LLMs). They’re getting smarter, cheaper, and easier to deploy, but for many real-world applications they may not be the best fit.
At ZeroMission, we’re deeply focused on how AI can solve practical challenges in transportation, infrastructure, and operations. And in our experience, Small Language Models (SLMs), especially when combined with structured knowledge graphs, are quickly becoming the real engines of business-ready intelligence.
The Case for Small, Smart, and Specific
LLMs are powerful generalists. They excel at open-ended language tasks, broad summarization, and reasoning across unstructured data. But when AI needs to be truly useful in a business context (say, optimising fleet maintenance schedules or analysing EV charging behaviour), focus matters.
That’s where SLMs shine.
Instead of throwing a massive general-purpose model at every problem, smarter architectures are breaking tasks down. A top-performing reasoning engine, like DeepSeek R1, uses a “mixture of experts” approach. While it technically has 671 billion parameters, only a fraction (about 37 billion) activate per query. This selective activation is more efficient, and better suited to tasks that require precise, specialised subroutines.
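The gating idea behind mixture-of-experts can be sketched in a few lines. This is a toy illustration only (the expert names, scores, and top-k value are invented, not DeepSeek R1's actual design): a gate scores every expert, but only the top-k actually run, so most of the model's parameters stay idle for any given query.

```python
def route(gate_scores: dict, k: int = 2) -> list:
    """Return the k highest-scoring expert names; the rest stay idle."""
    return sorted(gate_scores, key=gate_scores.get, reverse=True)[:k]

# A query about charging scores highest for the 'energy' expert, so the
# 'compliance' expert's parameters are never activated for this query.
scores = {"energy": 0.7, "fleet": 0.2, "compliance": 0.1}
print(route(scores))  # ['energy', 'fleet']
```

In a real MoE layer the gate is itself learned and the "experts" are sub-networks, but the routing principle is the same: compute follows relevance, not model size.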
By deploying smaller, more targeted models, businesses can achieve faster, more accurate, and more cost-effective outcomes. For example, one SLM might focus on operations data, another on compliance regulations, and a third on energy usage, all feeding into a central decision-making system.
This federated approach mirrors how human reasoning works. Our brains don’t light up every region for every thought; we use specific circuits for memory, motor skills, or problem-solving. AI systems should work the same way.
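The federated pattern described above can be sketched as a lightweight router that dispatches each question to the one domain specialist that owns it. The "SLMs" here are stub functions and the keyword routing is a deliberately crude placeholder; a production router would itself be a small classifier model.

```python
# Stub domain specialists standing in for fine-tuned SLMs.
def ops_slm(q): return f"[ops] handled: {q}"
def compliance_slm(q): return f"[compliance] handled: {q}"
def energy_slm(q): return f"[energy] handled: {q}"

# Illustrative keyword routes; a real system would use a learned classifier.
ROUTES = {
    "maintenance": ops_slm,
    "regulation": compliance_slm,
    "charging": energy_slm,
}

def dispatch(question: str) -> str:
    """Send the question to the first specialist whose domain matches."""
    for keyword, model in ROUTES.items():
        if keyword in question.lower():
            return model(question)
    return "no specialist available"

print(dispatch("When is the next maintenance window?"))
# [ops] handled: When is the next maintenance window?
```

The point of the structure, rather than the stubs, is that each specialist stays small and independently updatable while a central layer assembles their outputs.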
SLMs in Action
Try asking a generalist LLM, like ChatGPT, about your current EV infrastructure or how many megawatts you’re pulling from the depot on a peak day, and you’ll likely get a guess. LLMs are notoriously shaky on real-time or numerical data.
An SLM trained to query databases, interpret IoT sensor feeds, or run live optimisation models can provide far more reliable results. And when paired with an LLM to explain those results in natural language, the combination becomes even more powerful.
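The division of labour above can be made concrete with a minimal sketch. Assume (purely for illustration; the schema and figures are invented) a depot-load table: the narrow model's job reduces to producing a precise query over live data, hard-coded here, and a general model would then phrase the verified number for a human instead of guessing it.

```python
import sqlite3

# Invented example data: peak depot draw per day, in megawatts.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE depot_load (day TEXT, peak_mw REAL)")
db.executemany("INSERT INTO depot_load VALUES (?, ?)",
               [("Mon", 2.1), ("Tue", 3.4), ("Wed", 2.8)])

# Step 1: the "SLM" emits a grounded query rather than a guessed number.
query = "SELECT MAX(peak_mw) FROM depot_load"
(peak,) = db.execute(query).fetchone()

# Step 2: an LLM would turn the verified figure into natural language.
print(f"Peak depot draw this week: {peak} MW")
```

The factual content comes from the database, not from model weights; the language models only translate between the question and the query, and between the result and the answer.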
Take Microsoft’s Phi-2 as a case in point. Despite being a fraction of GPT-4’s size, it matches or outperforms far larger models on maths and coding benchmarks, largely because it was trained on high-quality, carefully curated data.
Until we hit true AGI (artificial general intelligence), no single model will be great at everything. But an SLM trained for your specific domain, whether that’s transport logistics or depot energy balancing, can deliver peak performance today.
The Real Magic: Knowledge Graphs + GraphRAG
Here’s the catch: even the best SLM won’t help if it’s working off stale or incomplete data. Transformers (the architecture behind most LLMs and SLMs) don’t natively update themselves in real-time.
That’s where knowledge graphs come in.
A well-structured, continually updated knowledge graph serves as a live foundation for your AI. It contextualises, verifies, and grounds model outputs in real-world data, not hallucinated guesses.
When paired with Retrieval-Augmented Generation (RAG), and especially GraphRAG, this setup becomes transformative. It allows AI systems to dynamically retrieve the most relevant structured and unstructured data at runtime, improving not just factual accuracy but also reasoning quality.
For enterprise AI, this means sharper insights, better ROI, and more actionable outputs, without the GPU bill of running enormous LLMs.
Why It Matters for Transport and Infrastructure
At ZeroMission, we’re already exploring hybrid SLM + GraphRAG models across several areas:
Fleet maintenance: Using AI to predict part failures based on usage, telemetry, and historical patterns: not just general data, but the specifics of your vehicles and routes.
Depot electrification: Analysing real-time load, grid interactions, and pricing to optimise charging schedules.
Regulatory compliance: Parsing local and EU regulations into machine-readable logic that guides operations, with updates reflected in the knowledge graph as rules change.
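To make the compliance idea concrete, here is a hypothetical sketch of a regulation encoded as a machine-readable rule that operations code can evaluate directly. The rule name and the 4.0 MW threshold are invented for illustration, not real EU law; in practice the threshold would live in the knowledge graph and be refreshed as regulations change.

```python
# Illustrative rule store; in practice these values come from the
# knowledge graph and are updated when the underlying rules change.
RULES = {
    "max_depot_load_mw": 4.0,
}

def compliant(planned_load_mw: float) -> bool:
    """Check a planned charging load against the current rule."""
    return planned_load_mw <= RULES["max_depot_load_mw"]

print(compliant(3.2))  # True under the illustrative 4.0 MW cap
```

Because the rule is data rather than code, a regulatory update changes one graph entry instead of redeploying the system.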
The goal isn’t to replace humans or centralise every insight; it’s to build tools that support better decisions with contextual, domain-specific intelligence.
A New Direction for GenAI
The next evolution of enterprise AI isn’t about chasing bigger models. It’s about smarter design.
A hybrid architecture (SLMs for precision, LLMs for explanation, knowledge graphs for truth, and GraphRAG for dynamic context) is the way forward. It’s more affordable, more scalable, and far more aligned with the complexity of real-world business environments.
At ZeroMission, we believe this shift will unlock a new wave of innovation in clean transport, smart infrastructure, and digital operations. And we’re building the tools to get us there.
Sources worth reading:
TechRadar: https://www.techradar.com/pro/how-slms-and-knowledge-graphs-supercharge-ai
STL Digital: https://www.stldigital.tech/blog/enterprise-ai-optimization-tackling-llm-hurdles-and-embracing-slm-growth/