Should I use an LLM?
With the recent advancements in natural language processing (NLP), exemplified by the development and deployment of large language models (LLMs) such as ChatGPT and Gemini, businesses and organisations are quite rightly asking: Should we be using them?
There are benefits, particularly in terms of efficiency, cost reduction, and creativity.
However, at the moment, it’s unclear whether there’s real knowledge about the concerns that should be considered. These include the cost of proper implementation, privacy, security, ethical considerations, alignment with societal values, and appreciating their non-deterministic nature, as well as the lack of guaranteed accuracy.
Are LLMs the right tool for my organisation?
These challenges lead to a fundamental question that all organisations must address during their LLM adoption journey: “Are LLMs the right tool?” In other words, do LLMs align with an organisation’s specific ecosystem? There are other options, such as traditional machine learning, process automation and software redesign.
While there is no definitive or universally correct answer to what the right tool is, we list below some key questions an organisation should ask to make informed decisions about when to use LLMs.
The non-deterministic nature of LLMs
A non-deterministic system can produce different outputs given the same input. This variability can be beneficial for tasks requiring creativity, such as composing music. However, it poses problems in settings that demand consistency and reliability. For instance, consider an autonomous vehicle making a critical decision, such as whether to cross an intersection. Such choices require reliance on a system that, given a specific input, will always produce the same response.
The non-deterministic nature of LLMs also presents challenges for tasks like threat detection. As a general rule, safety-critical systems and those requiring adherence to specific rules or protocols should avoid relying on LLMs.

Non-guaranteed accuracy
LLMs can produce fabricated, factually incorrect, or misleading content, and these outputs can have serious consequences depending on the domain.
The issue of non-guaranteed accuracy is particularly problematic for generative AI systems, where an LLM chains multiple responses together to perform a single task. A single incorrect response can render the entire task’s output erroneous.
For example, an agentic system tasked with predicting drug safety might need to analyse several safety and test reports. Errors in assessing individual reports can potentially affect the agent’s final decision.
Ethics and adherence to societal values
LLMs are trained on vast amounts of data, much of which has not been vetted for specific applications. Consequently, the output or generated content from LLMs may not always reflect an organisation’s values.
Therefore, using LLMs in scenarios that require ethical sensitivity may necessitate careful consideration, along with the implementation of mitigating factors, to help maintain consistency with the organisation’s values.
Privacy and Security
Data privacy and security face significant challenges within the context of LLMs. Since LLM systems are not inherently designed with privacy and security in mind, they may expose sensitive data or corporate secrets without appropriate mitigating measures.
Additionally, LLMs are susceptible to various attacks, including context poisoning, prompt injection attacks, and sensitive information disclosure.
Costs of proper implementation
Building an Agentic, LLM-driven system requires proper development and training. It isn’t simply a prompt engineering exercise with no guardrails.
Our experience tells us that significant work is needed to:
- Make data manageable for an LLM to consider
- Get the LLM to think deeply enough
- Ensure it only talks on topic
- Make the LLM talk in a way that meets your company’s values,
- Make sure it has enough QA to present real facts.
If you add to this the resource needs for infrastructure to run the LLMs, you may find that other options are better.
Conclusion
The list of questions to consider is not exhaustive, but hopefully it makes you think. If you would like to take a closer look and fully understand your options, please visit our AI and Machine Learning Consultancy page or get in touch with us here.