AI adoption is accelerating. Thoughtful AI adoption is lagging behind.
Artificial Intelligence (AI) continues to infiltrate various aspects of daily life.
While some are sceptical of its potential, others believe AI can provide faster and more effective approaches to addressing a wide range of problems, from policing and crime prevention to personalised healthcare and streamlining the judicial system. This belief is likely motivated by the success of AI in industries such as retail and manufacturing.
It is, therefore, not surprising that many organisations, including public institutions, are trying to adopt AI and related technologies in some way or form.
Unfortunately, adopting a technology with as much potential as AI generally requires far more input than the technology itself. In other words, recruiting data-related professionals and experts to develop AI systems is unlikely to be sufficient to meet all objectives.
The internet taught us a lesson we’re at risk of forgetting.
We observed similar trends with the advent of the Internet.
As Internet use has grown through the 21st century, users have faced abusive practices such as unwanted commercial email (spam), identity theft and, more recently, pervasive user tracking.
Experts have responded to some of these challenges by providing technical solutions to mitigate them, while regulators and governments have stepped in to prohibit certain practices and, in some cases, now require organisations to inform their users of their practices in advance.
Importantly, we all expect organisations that we interact with online to mitigate our exposure to abusive practices. This means that from an organisation’s perspective, simply creating an online presence is probably not the biggest challenge.
The real complexity lies in understanding the following questions:
- Are all potential challenges addressed adequately?
- What is the potential return on investment in building an online presence?
- What is the impact of the online presence on the organisation’s ability to create value in the short, medium and long term?
The lesson, in short: creating an online presence obliges an organisation to manage the risks that come with it.
The question isn’t whether to adopt AI. It’s how to do it without losing what matters.
Investing in AI is not so different: successfully developing AI systems that meet the constantly changing needs of today's society requires a holistic, methodical and adaptive approach to assessing and managing the risks around AI. Such an approach should ensure that the AI systems an organisation develops are aligned with its business practices and meet certain social standards.

The importance of this is underscored by the growing prominence of principles such as privacy, fairness and social equality in discussions about AI. For an organisation, failing to meet those standards can mean lost opportunities. Worse, it may even lead to the organisation's demise, as the example of Cambridge Analytica demonstrates.
What is SAIF – the Sustainable AI Framework?
Our sustainable AI framework (SAIF) is designed to help decision-makers, such as policymakers, boards, C-suites, managers, and data scientists, create AI systems that meet business and social principles.
By focusing on four pillars related to the socio-economic and political impact of AI, SAIF helps an organisation understand its exposure to the undesired consequences of AI, and the impact of AI on its ability to create value in the short, medium and long term.
How our four-step SAIF process works
Step 1: Understanding your organisation’s AI objectives and values
Before anything else, we need to understand what your organisation is actually trying to achieve, and what it stands for. This isn’t a cursory values exercise. It’s the foundation on which everything else is built, because the way AI will be used to support your mission and vision depends entirely on having that foundation clearly defined.
Getting this right early pays dividends throughout the process. It means that when complexity arises (and with AI integration, it always does), you have an agreed set of principles to navigate by, rather than making ad hoc decisions that gradually pull the implementation away from what you originally intended.
Step 2: Translating your values into measurable performance indicators
Values stated on a website are not the same as values embedded in a system. Once your organisation’s values and AI objectives are clearly defined, the next step is to translate them into what we call “ethical or soft” performance indicators: measurable benchmarks that enable concrete evaluation of how well an AI implementation aligns with your organisation’s standards.
This step is where the abstract becomes operational. It’s the mechanism that ensures the output of any AI implementation reflects the right values for your business — not just in principle, but in practice and at scale.
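To make the idea of an "ethical or soft" performance indicator concrete, here is a minimal sketch of how a stated value, fair outcomes across groups, might be encoded as a measurable benchmark. The metric (demographic parity gap), the threshold, and the example data are illustrative assumptions, not part of SAIF itself; in practice the indicators would be chosen with the organisation.

```python
def demographic_parity_gap(outcomes, groups):
    """Largest difference in positive-outcome rate between any two groups.

    outcomes: list of 0/1 decisions (e.g. loan approved = 1)
    groups:   parallel list of group labels (e.g. a protected attribute)
    """
    rates = {}
    for outcome, group in zip(outcomes, groups):
        total, positives = rates.get(group, (0, 0))
        rates[group] = (total + 1, positives + outcome)
    positive_rates = [positives / total for total, positives in rates.values()]
    return max(positive_rates) - min(positive_rates)

# Hypothetical benchmark: the system fails the indicator if approval
# rates between groups differ by more than 5 percentage points.
THRESHOLD = 0.05

decisions = [1, 0, 1, 1, 0, 1, 0, 0]
labels    = ["a", "a", "a", "a", "b", "b", "b", "b"]
gap = demographic_parity_gap(decisions, labels)
print(f"gap={gap:.2f}, within benchmark: {gap <= THRESHOLD}")
# prints: gap=0.50, within benchmark: False
```

The point is not this particular metric but the shape of the exercise: each value becomes a number that can be computed, monitored, and given a pass/fail threshold agreed in advance.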
Step 3: Assessing your operations, data assets, and activities against those indicators
With performance indicators established, we conduct a thorough assessment of your organisation’s operations, data assets, and activities — evaluating each against the benchmarks defined in step two. Every AI application is examined through the lens of the ethical considerations that matter most: fairness, transparency, accountability, and privacy.
In practical terms, this is a rigorous data vetting exercise. Its purpose is to surface and remove bias before it becomes embedded in your AI systems, and to ensure that what you deploy is fair to your customers, your employees, and anyone else it touches. This is the due diligence step most AI projects skip, and it’s often the reason they run into trouble later.
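One small example of the kind of check this vetting involves: comparing each group's share of a dataset against the share you would expect from the population it is meant to represent. The field names, expected shares, and 10-point tolerance below are illustrative assumptions, a sketch of one check rather than the full assessment.

```python
from collections import Counter

def representation_report(records, group_field, expected_shares, tolerance=0.10):
    """Flag groups whose share of the data drifts from an expected share."""
    counts = Counter(record[group_field] for record in records)
    total = sum(counts.values())
    report = {}
    for group, expected in expected_shares.items():
        observed = counts.get(group, 0) / total
        report[group] = {
            "observed": round(observed, 2),
            "expected": expected,
            "flagged": abs(observed - expected) > tolerance,
        }
    return report

# Hypothetical dataset: one region is heavily over-represented.
rows = [{"region": "north"}] * 70 + [{"region": "south"}] * 30
print(representation_report(rows, "region", {"north": 0.5, "south": 0.5}))
```

A flagged group does not automatically mean the data is unusable; it means the imbalance must be explained, corrected, or accounted for before any model trained on the data goes near production.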
Step 4: Reporting findings and recommendations
Knowledge is only useful if it’s clear and actionable. The process concludes with a detailed findings and recommendations report – a document that identifies any ethical gaps in your current or proposed AI implementation, explains their implications, and sets out the specific steps needed to address them.
This gives your decision-makers, whether that’s your board, your technology leadership, or your operational teams, the information they need to make well-informed choices about AI deployment, with confidence that those choices are grounded in a rigorous and honest assessment of where you stand.
Want to build an AI strategy that reflects how you actually want to do business?
If you’re exploring AI adoption and want to approach it in a way that’s aligned with your values, your governance requirements, and your long-term commercial interests, get in touch. We’d be happy to talk through where you are and what a structured approach might look like for your organisation.