SLM series - Syndicode: How to use the right (model) tool for the job

This is a guest post for the Computer Weekly Developer Network by Volodymyr Murzak in his role as solution architect and tech lead at software development company Syndicode.

Syndicode is a software development company offering web development, mobile app development, SaaS development and consulting services. It builds bespoke digital products for clients, managing the entire development process from concept through deployment and ongoing maintenance.

The company has expertise in business analysis, UX/UI design and quality assurance testing.

Murzak writes in full as follows…

On the question of whether SLMs and LLMs should be put to work together, the short answer is: it depends. If you’re dealing with tasks that need deep contextual understanding and broad general knowledge, an LLM will probably be your best bet. But if you need something lean, efficient and specific, then an SLM is your guy.

The real magic happens when you make them work in tandem. LLMs can handle the broad strokes, while SLMs handle the precision work. The trick is not throwing everything at a giant model just because you can — efficiency matters.

Intelligent routing 

This is where intelligent routing steps in. 

AI needs to be smart about where it sends queries. If a task requires deep domain expertise, it makes no sense to hit a general-purpose LLM when a lightweight specialised model could do the job faster and with better accuracy. It’s like asking a cardiologist about a broken arm – it’s the wrong tool for the job. 

And, let’s be honest, performance matters. Nobody wants to wait for a bloated model to chew through data when a streamlined one could spit out the answer in milliseconds.

SLMs have a clear advantage when speed is a priority. They’re faster to train, easier to update and much cheaper to run. That’s a big deal in AI engineering because time isn’t just money; it’s the difference between staying ahead and playing catch-up. 
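As a rough illustration of the routing idea, the decision can start as something as simple as a dispatch table in front of the models. This is a minimal sketch, not a production router: the model names and the keyword heuristic are hypothetical, and a real system would likely use a lightweight classifier or embedding similarity instead of keyword matching.

```python
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class Route:
    model: str                       # hypothetical model identifier
    matches: Callable[[str], bool]   # predicate deciding if a query fits this domain

# Domain-specific SLM routes are checked first; anything unmatched
# falls through to the general-purpose LLM.
ROUTES: List[Route] = [
    Route("finance-slm",
          lambda q: any(k in q.lower() for k in ("invoice", "fraud", "ledger"))),
    Route("medical-slm",
          lambda q: any(k in q.lower() for k in ("diagnosis", "symptom", "dosage"))),
]

FALLBACK = "general-llm"

def route_query(query: str) -> str:
    """Return the identifier of the model this query should be sent to."""
    for route in ROUTES:
        if route.matches(query):
            return route.model
    return FALLBACK
```

The design choice here mirrors the article’s point: the router itself must be cheap and fast, so the specialised models only ever see queries in their lane, and the expensive general model is reserved for everything else.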

Keep SLMs close (to your chest)

When it comes to deployment, most businesses prefer to keep SLMs close, often on-premises or in private clouds. The reason is simple control. If you’re dealing with sensitive data, you don’t want it floating around on public infrastructure.

There’s also the environmental angle. Training a massive LLM isn’t exactly energy efficient. SLMs have a smaller footprint, meaning less infrastructure strain and lower costs. But let’s not pretend they’re perfect. Smaller models can struggle with general knowledge and bias is still an issue. If you train a model on insufficient data, it doesn’t matter how small or large it is. You’ll get garbage results.


As for whether domain-specific LLMs are better than SLMs, it’s a toss-up. A fine-tuned LLM can be mighty, but if the task is narrow enough, an SLM will do the job more efficiently. Again, it’s about using the right tool for the right job. 

Regarding jobs, SLMs shine in applications where speed and accuracy are the priorities. Think customer support chatbots, medical data processing and finance analytics. Anywhere that requires specialised knowledge but doesn’t need the full weight of an LLM.

Live & die by data processing

So, are finance and retail key areas? Absolutely. These industries live and die by fast, accurate data processing. Fraud detection, sentiment analysis and real-time customer insights are places where SLMs work well because those tasks demand quick, precise and cost-effective processing.

And finally… yes, healthcare is another big one. 

Analysing physicians’ notes, medical imaging and diagnostics are all tasks that benefit from specialised, efficient models.