AI tools are designed to feel conversational. You type something, it responds, and the whole exchange has the casual texture of a chat with a colleague. That design is deliberate — it makes the tools more accessible, and it makes the learning curve feel shallower. It also makes it very easy to forget that the conversation is not private.
Most UK professionals using AI tools at work have not been given any guidance on what's appropriate to share with these systems and what isn't. They make reasonable assumptions based on how the tools feel — intuitive, helpful, contained — and those assumptions are often wrong. The consequences range from awkward to genuinely serious, depending on what was shared and what the tool's data policies actually say.
This post covers five categories of information that regularly find their way into AI tools and shouldn't. Not because AI tools are inherently unsafe, but because using them without understanding the data implications is how organisations end up with compliance problems they never saw coming.
1. Personal data about clients, customers, or patients
The most common category, and the most consequential. Professionals across healthcare, legal, financial services, and customer-facing roles frequently use AI tools to help draft communications, summarise case notes, or prepare reports. When they do, they often include identifying information about the person the document concerns — a full name, a reference number, details of a situation that makes someone identifiable even without a name. That information goes into the AI tool's input, which means it's governed by that tool's data policies.
Under UK GDPR, personal data has to be processed lawfully, fairly, and with appropriate safeguards. Sending a client's name and medical history into a consumer AI tool — even briefly, even just to get a draft letter — is personal data processing. Whether it's compliant depends on a set of questions that most users have never thought to ask: What does the tool do with prompts? Does it use them for model training? Is there a data processing agreement in place between the tool provider and your organisation? Is the tool's data infrastructure based in the UK or the European Economic Area?
The answer to those questions varies considerably between tools, between pricing tiers, and between account configurations. A legal professional using the free tier of a consumer AI tool to draft a client letter is in a very different data position from the same professional using an enterprise-tier account with a signed data processing agreement and training data opt-outs enabled. Most users don't know which situation they're in.
The safe working assumption, until you know otherwise, is to keep real client, customer, or patient information out of AI tools and use anonymised or synthetic examples instead. You can tell an AI tool to "draft a letter from a financial advisor to a client who has just missed a payment" without naming the client or your firm. The output will be just as useful, and you haven't shared anything that matters.
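For teams that build prompts programmatically, that habit can be enforced in code before text ever reaches an AI tool. The sketch below is a minimal illustration, not a complete anonymisation solution: the anonymise helper, the placeholder names, and the reference-number pattern are all hypothetical examples, and any real pipeline should be reviewed by someone who knows the data, since indirect identifiers can survive simple substitution.

```python
import re

def anonymise(text: str, known_identifiers: dict[str, str]) -> str:
    """Replace known identifiers (names, reference numbers) with
    neutral placeholders before the text goes into a prompt.

    Illustrative only: real anonymisation also has to handle indirect
    identifiers (dates, locations, job titles) that simple string
    replacement will miss.
    """
    for identifier, placeholder in known_identifiers.items():
        text = text.replace(identifier, placeholder)
    # Catch one hypothetical reference-number shape, e.g. 'AB-104928'
    text = re.sub(r"\b[A-Z]{2}-\d{6}\b", "[REF]", text)
    return text

notes = "Mrs Jane Doe (ref AB-104928) missed her payment on 3 May."
safe = anonymise(notes, {"Mrs Jane Doe": "[CLIENT]"})
print(safe)  # "[CLIENT] (ref [REF]) missed her payment on 3 May."
```

The point is the order of operations: the substitution happens on your side, so the real name and reference number never leave your systems in the first place.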
2. Commercially sensitive business information
Strategic plans, pricing structures, contract terms, acquisition discussions, internal financial projections — all of these are commercially sensitive, and all of them occasionally appear in AI prompts when someone is trying to get help drafting a document or thinking through a problem. The risk here is partly about data policy and partly about something more specific to AI tools: the training data question.
Most major AI providers now offer enterprise accounts with explicit commitments that prompts will not be used for model training. But this is not the default position for free accounts or many standard paid tiers, and the terms of service language tends to be dense enough that users rarely read it carefully before deciding what to share. If you are using a tool on its standard consumer plan and you type in your company's pricing strategy because you're drafting a sales deck, you've shared that information with the provider under whatever terms they've set out — which may include using it to improve their model.
For founders and directors, this is particularly worth thinking about. The conversations where AI is most tempting as a thinking partner — strategy discussions, competitive analysis, merger and acquisition planning — are often exactly the ones where the information involved is most sensitive. The more specific and proprietary the content, the more careful you need to be about where it goes.
3. Employee performance information and HR data
HR professionals, managers, and business owners regularly use AI tools to help with performance review drafts, disciplinary documentation, and internal communications about staff. The intention is usually to save time on difficult writing tasks, which is reasonable. The problem is that HR data is among the most tightly regulated categories of personal data under UK law, and it involves identifiable individuals who have specific rights over how their information is handled.
Typing a team member's name, role, and details of a performance concern into an AI tool in order to get a draft review or investigation report is processing that person's personal data. They have not consented to it being processed by that tool. Your organisation almost certainly hasn't documented it in your data processing records. And if something goes wrong — a data breach, a Subject Access Request that turns up AI-generated documents, an employment tribunal where the paper trail becomes relevant — the fact that a draft was generated using a third-party AI tool with unclear data retention policies is not a detail you want to be explaining.
Again, the practical solution is usually simple: anonymise before you input. Describe the situation without identifying the individual. You can tell an AI tool "help me draft a written warning for a team member who has had three unexplained absences in two months" without using their name or any identifying detail. The draft will serve its purpose, and you haven't created a compliance issue.
4. Unpublished creative or intellectual work
This one is less about compliance and more about intellectual property. If you are working on something that hasn't yet been published — a book manuscript, a research paper under review, a product design document, a proprietary training programme — and you paste it into an AI tool for editing or feedback, you're sharing it under that tool's terms. Depending on those terms, the provider may have broad rights to use the content you submit.
For creative professionals, academics, and anyone working on proprietary materials, this is worth taking seriously. The tool's output might help you improve your draft. But you've submitted the draft to a third party, and what that means for your intellectual property depends on terms most people haven't read. The safer approach is to work with extracts small enough that they don't constitute the work, or to use AI tools for structure and approach rather than for reviewing the specific text you intend to publish or protect.
5. Access credentials, passwords, or authentication details
This one should be obvious, but it comes up often enough to be worth stating plainly. People paste error messages into AI tools for debugging help, and sometimes those error messages contain API keys. People ask AI tools to help them write configuration documentation and include live credentials in the example. People use AI tools to troubleshoot system access issues and describe authentication details in the process.
AI tools are not password managers. They are not designed to handle secrets. Their interfaces are not secured for that purpose, and their data handling is not built around protecting authentication credentials. Credentials that appear in a prompt should be considered exposed, rotated immediately, and not submitted again. This is not a theoretical risk — it's the kind of mistake that security teams deal with in the real world.
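Where pasting logs and error messages into AI tools is part of a workflow, a pre-flight scan can catch the most recognisable credential formats before anything is submitted. The sketch below is a hypothetical example, not a security control: the patterns cover only a few well-known token shapes, and a maintained secret-scanning tool is the right answer in production.

```python
import re

# Illustrative patterns for a few credential formats. Deliberately
# incomplete -- a real check would use a dedicated secret scanner.
SECRET_PATTERNS = {
    "AWS access key ID": re.compile(r"\bAKIA[0-9A-Z]{16}\b"),
    "GitHub token": re.compile(r"\bghp_[0-9A-Za-z]{36}\b"),
    "Generic API key": re.compile(r"(?i)\b(api[_-]?key|secret|token)\s*[:=]\s*\S+"),
}

def find_secrets(text: str) -> list[str]:
    """Return the names of any credential-shaped strings in the text."""
    return [name for name, pattern in SECRET_PATTERNS.items()
            if pattern.search(text)]

error_log = "Request failed: api_key=sk-test-12345 returned 401"
hits = find_secrets(error_log)
if hits:
    print(f"Do not paste this -- possible secrets found: {hits}")
```

Even a crude check like this catches the common case: an error message copied wholesale from a terminal with a live key embedded in it. And the rule from above still applies: any credential that does slip through should be treated as exposed and rotated.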
How to build good habits before they're forced on you
Most of the problems described here are avoidable with a modest amount of awareness and a few working habits. Anonymise personal information before it goes into a prompt. Check the data policy for any tool your team uses professionally — specifically the sections on training data and data retention. Establish a simple internal rule about what categories of information require more caution. And treat AI tools like any other third-party software: useful, but not unconditionally trusted with everything.
Organisations that build these habits proactively are in a much better position than those that encounter the compliance questions for the first time in response to an incident. The ICO's guidance on AI and data protection is growing more specific, UK employment tribunals are beginning to see AI-related evidence questions, and professional regulators in sectors like law and financial services are publishing clearer expectations around AI use. The direction of travel is towards more accountability, not less.
Getting your team trained properly — not just on how to use AI tools effectively, but on how to use them responsibly — is not a bureaucratic exercise. It's the thing that determines whether AI adoption in your organisation is genuinely an asset or a liability waiting to show itself.
AI for All UK's programme includes dedicated modules on AI ethics, data protection, legal considerations, and cybersecurity — alongside practical skills in prompt engineering, workflow automation, and content creation. The programme is free for eligible UK citizens and ILR holders. Visit aiforalluk.com to check eligibility or speak to the team.