Getting serious about AI governance is one thing. Working out what it actually looks like in practice – the rules, the ownership, the decision-making frameworks that people can use day to day – is where most organisations find the going gets harder.
Privacy, risk, and policy are where that responsibility becomes operational, and where intent either gets translated into something workable or quietly dissolves into good intentions and vague guidance.
AI adoption requires more than ‘hoping for the best’. It requires explicit guardrails that reflect strategic intent, not vague aspirations. Teams need to understand what is acceptable, where accountability sits, and how decisions involving AI are meant to be made, challenged, and, when necessary, stopped. Without this clarity, organisations tend to drift into one of two unhelpful extremes: either progress stalls because no one feels confident enough to move forward, or experimentation happens informally, outside of governance, creating unmanaged risk.
At a minimum, organisations need policies and standards that cover:
- Acceptable use, transparency, and accountability – so people know what they can do with AI tools, what they must disclose, and who is responsible for outcomes.
- Risk management across bias, explainability, privacy, and safety – recognising that AI systems can introduce or amplify risk in ways that are not always obvious at the point of use.
- Compliance with data protection, intellectual property, and contractual obligations – including sector-specific requirements where relevant.
In the UK context, government guidance such as the AI Playbook, the Technology Code of Practice, and the Cloud Security Principles all point in the same direction: strong governance, clear ownership, and proportionate controls are not optional extras; they are prerequisites for using AI responsibly in real organisations.
Privacy is often where these issues surface first, but it is rarely the only risk.
GDPR is the clearest and most familiar example. Uploading company data containing personal or sensitive information into third-party AI tools can create immediate compliance issues, particularly if that data may be retained or used to train external models. This is not a theoretical concern: uploading data in this way is common behaviour, driven by convenience and a lack of clear guidance. You can find more on the privacy side of things elsewhere on this blog.
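To make that concrete, here is a deliberately simple sketch of the kind of pre-submission check an organisation might place in front of external AI tools. Everything in it is an illustrative assumption (the patterns, the function names, the blocking behaviour); real personal-data detection needs far more than a handful of regular expressions.

```python
import re

# Illustrative patterns only -- real personal-data detection needs far more
# than regular expressions (names, addresses, free-text context, and so on).
PATTERNS = {
    "email address": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "UK National Insurance number": re.compile(r"\b[A-CEGHJ-PR-TW-Z]{2}\d{6}[A-D]\b", re.I),
}

def personal_data_findings(text: str) -> list[str]:
    """Return the names of any patterns that matched the text."""
    return [label for label, pattern in PATTERNS.items() if pattern.search(text)]

def safe_to_submit(text: str) -> bool:
    """Block submission to an external AI tool if anything obvious is found."""
    findings = personal_data_findings(text)
    if findings:
        print("Blocked: possible personal data detected:", ", ".join(findings))
        return False
    return True

if __name__ == "__main__":
    draft = "Please summarise this complaint from jane.doe@example.com."
    if safe_to_submit(draft):
        print("OK to send to the approved external tool.")
```

A check like this does not make the use compliant on its own; it simply stops the most obvious mistakes before data leaves the organisation, and makes the policy visible at the moment someone is about to breach it.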
Are organisations outsourcing risk decisions to individuals by default?
Many organisations assume that risk is mitigated because individuals or teams are using paid AI subscriptions, often with assurances that data will not be used to train external models. While this can reduce certain risks, it does not remove them entirely. The reality is rarely so straightforward, and the terms governing data use, retention, and liability are complex, evolving, and not always well understood by the people using the tools day to day.
For most employees, this creates an uncomfortable gap. They are asked to make judgement calls about data usage based on lengthy terms and conditions that few have the time or expertise to interpret, often accepting default settings without fully understanding the implications. In practice, this shifts risk onto individuals who are not equipped to assess it, rather than keeping accountability where it belongs – with the organisation.
This is precisely why clear policy, guidance, and technical controls matter. Organisations cannot rely on subscription tiers or vendor assurances alone. They need to be explicit about which tools can be used, for what purposes, with which types of data, and under what conditions – and they need to design their AI environment so that people are not forced to make these decisions in isolation.
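One way to make that explicitness tangible is to express the tool register as data rather than prose, so it can be checked automatically as well as read. The sketch below is a hypothetical example: the tool names, data classifications, purposes, and conditions are placeholders, not a recommended policy.

```python
# A minimal sketch of an internal AI tool register, expressed as data so it can
# be checked automatically as well as read by people. All names and
# classifications are hypothetical placeholders.
APPROVED_TOOLS = {
    "internal-chat-assistant": {
        "allowed_data": {"public", "internal", "confidential"},
        "allowed_purposes": {"drafting", "summarisation", "code-review"},
        "conditions": ["data stays inside the organisation's environment"],
    },
    "external-llm-subscription": {
        "allowed_data": {"public"},
        "allowed_purposes": {"drafting", "research"},
        "conditions": ["no client, personal, or proprietary data",
                       "output reviewed by a human before use"],
    },
}

def is_use_permitted(tool: str, data_class: str, purpose: str) -> bool:
    """Check a proposed use against the register; unknown tools are denied."""
    entry = APPROVED_TOOLS.get(tool)
    if entry is None:
        return False
    return data_class in entry["allowed_data"] and purpose in entry["allowed_purposes"]

# Example: an employee wants to summarise a confidential report.
print(is_use_permitted("external-llm-subscription", "confidential", "summarisation"))  # False
print(is_use_permitted("internal-chat-assistant", "confidential", "summarisation"))    # True
```

The point is not the code itself but the shift it represents: the decision about which tool can touch which data is made once, by the organisation, rather than repeatedly by individuals under time pressure.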
Other risks are just as material.
Intellectual property can be inadvertently exposed when proprietary documents, code, or product plans are fed into public models. Contractual obligations with clients or partners can be breached if data is reused in ways that were never agreed upon. Data residency requirements can be violated if organisations do not understand where data is processed or stored. And in regulated industries, automated or AI-assisted decision-making can raise additional obligations around explainability and auditability.
This is why governance needs to move beyond policy documents and into technical and operational choices. Knowing where data sits, how it moves through systems, and who ultimately has access to it is as important as the rules written on paper. It is also why we design and recommend approaches that keep sensitive data within an organisation’s own technical environment wherever possible, building AI capability in a way that keeps data secure, controlled, and consistent with internal policies, rather than allowing it to leak into external systems by default. Our product Chatz was built on exactly that principle, for organisations that want the benefits of conversational AI without the governance trade-offs that typically come with it.
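As a simplified illustration of that routing principle (not a description of Chatz or any specific product), the sketch below sends anything above public data to an endpoint inside the organisation's own environment. The endpoint URLs and classification labels are placeholders.

```python
# A simplified illustration of the routing principle: requests carrying anything
# more sensitive than "public" data never leave the organisation's own
# environment. Endpoints and labels are placeholders, not a product description.
INTERNAL_ENDPOINT = "https://ai.internal.example.org/v1/chat"   # self-hosted / private
EXTERNAL_ENDPOINT = "https://api.external-provider.example/v1"  # third-party service

SENSITIVITY_ORDER = ["public", "internal", "confidential", "restricted"]

def choose_endpoint(data_classification: str) -> str:
    """Route a request by sensitivity; unknown labels are treated as restricted."""
    if data_classification in SENSITIVITY_ORDER:
        level = SENSITIVITY_ORDER.index(data_classification)
    else:
        level = len(SENSITIVITY_ORDER) - 1  # fail safe: assume the most sensitive
    if level > SENSITIVITY_ORDER.index("public"):
        return INTERNAL_ENDPOINT   # sensitive data is processed inside the boundary
    return EXTERNAL_ENDPOINT       # only public data may use external services

print(choose_endpoint("confidential"))  # https://ai.internal.example.org/v1/chat
```

Designing the environment this way means the safe choice is also the default choice, which is what allows policy on paper to hold up in day-to-day use.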
Strong privacy, risk, and policy frameworks are not about eliminating risk altogether. They are about making risk visible, intentional, and manageable. When teams understand the boundaries that they are working within, confidence increases. When leadership has clarity on ownership and accountability, trust follows. And when governance is built into the operating model, AI adoption becomes something the organisation can deliberately scale, rather than something it constantly needs to rein in.
---
This is part of a series on AI readiness for leadership teams. We’ve already covered privacy trade-offs and data readiness, and in the coming weeks we’ll be publishing more on what readiness looks like across leadership and the strategy of AI adoption. Watch this space.