Ask ten senior partners at professional services firms whether their staff use AI tools. Most will say they’re not sure, or that they have a policy against it.
Then ask the junior associates. The analysts. The paralegals.
The answer is different.
What’s actually happening
AI assistants have become a quiet layer in how work gets done. A tax consultant summarises a client’s financial statements in ChatGPT before writing a memo. A paralegal pastes contract clauses into Claude to check for ambiguities. An analyst uses Gemini to clean up a pitch deck.
None of this is malicious. It’s people doing their jobs more efficiently. The problem is what gets sent along for the ride.
When a CA pastes a client’s P&L into an AI tool to generate commentary, that financial data is transmitted to a third-party server. When a lawyer summarises case notes in ChatGPT, privileged information leaves the firm’s environment. When an HR consultant uses an AI assistant to draft a performance review, personal employee data is processed outside any boundary you control.
Most firms have no visibility into this. There is no record of which tools are in use, what data is going into them, or what each provider does with that data once it arrives.
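If you want a rough first answer, your own network logs can give you one. The sketch below is illustrative only: the file name, log format, and domain list are assumptions, and a real inventory would cover far more tools. But counting lookups of known AI assistant domains in an exported DNS or proxy log is enough to tell a partner whether "nobody here uses these" is true.

```python
# Illustrative sketch: count log lines mentioning well-known AI assistant
# domains. The log path and its format are assumptions; adapt to whatever
# your firewall or proxy actually exports.
from collections import Counter

AI_DOMAINS = {
    "chat.openai.com",
    "chatgpt.com",
    "claude.ai",
    "gemini.google.com",
}

def count_ai_lookups(log_path: str) -> Counter:
    """Count how often each known AI domain appears in the log."""
    hits = Counter()
    with open(log_path, encoding="utf-8") as log:
        for line in log:
            for domain in AI_DOMAINS:
                if domain in line:
                    hits[domain] += 1
    return hits

if __name__ == "__main__":
    for domain, count in count_ai_lookups("dns_queries.log").most_common():
        print(f"{domain}: {count} lookups")
```

Even a crude count like this usually surprises leadership. It says nothing about what data went into those tools, but it settles the question of whether they are in use.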
The retention question
The major AI providers handle data differently, and the policies change. Some use user-submitted conversations to train future models unless you opt out. Some retain data for 30 days, others longer. Enterprise tiers usually offer stronger protections, but most firms are using free or basic accounts, not enterprise ones.
Your client data is likely protected by confidentiality obligations. In some cases, by regulation. The question worth asking is: does your current practice hold up if a client asks where their financial data has been?
For CA firms, this touches the Institute’s confidentiality requirements. For law firms, it touches privilege. For HR consultants and management advisors, it touches the Digital Personal Data Protection Act, which is now live.
None of this means you should ban AI tools. That approach doesn’t work and creates a different problem: staff use them anyway, but now they don’t tell you.
What a realistic response looks like
The firms handling this well are not the ones with the strictest AI bans. They’re the ones that have taken a clear-eyed look at what’s actually happening, decided which uses are acceptable, and put boundaries around the sensitive ones.
That usually means: an inventory of which tools are in use, a decision on which data categories should never go into external AI systems, and a short briefing for staff. Not a 40-page policy document — a conversation with clear examples.
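The "never paste" list works better when it is concrete enough to check. As a rough illustration, text could be screened against pattern rules for blocked categories before it reaches an external tool. The category names and patterns below are simplified assumptions, not a working classifier; a firm would substitute its own list.

```python
# Illustrative sketch: flag text that matches patterns for data categories
# the firm has decided must never leave its environment. These patterns are
# deliberately simplified assumptions for the example.
import re

BLOCKED_PATTERNS = {
    "PAN (Indian tax ID)": re.compile(r"\b[A-Z]{5}[0-9]{4}[A-Z]\b"),
    "Aadhaar-like number": re.compile(r"\b\d{4}\s?\d{4}\s?\d{4}\b"),
    "Email address": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
}

def flagged_categories(text: str) -> list[str]:
    """Return the names of blocked data categories found in the text."""
    return [name for name, pattern in BLOCKED_PATTERNS.items()
            if pattern.search(text)]

if __name__ == "__main__":
    sample = "Client PAN ABCDE1234F, contact priya@example.com"
    print(flagged_categories(sample))
    # -> ['PAN (Indian tax ID)', 'Email address']
```

Whether this runs as a script, a browser check, or simply a printed checklist matters less than the underlying decision: the firm has named the categories, in writing, that staff must not paste into external tools.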
It also means checking whether the AI tools you’d like staff to use have business accounts with proper data agreements. Several major providers offer this at a reasonable cost, and the protection difference is significant.
One more thing worth knowing
The risk here isn’t just regulatory. Clients are starting to ask. In the UK and US markets, enterprise clients already include AI usage in their vendor due diligence. Indian enterprise clients are not far behind. A firm that can answer “yes, we’ve reviewed our AI practices and here’s what we’ve put in place” will stand out from one that can’t answer the question at all.
The underlying issue isn’t whether your people use AI tools. Of course they do. The issue is whether your firm has thought about it at a leadership level, or whether it’s just happening, unmanaged, one paste at a time.
A StackGuard audit maps exactly this: which AI tools are in use across your firm, where sensitive data is being processed, and what a practical policy looks like. It takes five to ten business days and produces a report written for partners, not IT managers.