The firm in this post is a mid-size professional services business in Delhi NCR. Forty-two employees. A mix of chartered accountants, tax consultants, and support staff. They came to us because a client had asked them a question they couldn’t answer: “What AI tools does your team use, and how do you manage client data in them?”
That question started a two-week engagement. What follows is an anonymised account of what we found.
What the tool inventory showed
Before the audit, the firm’s leadership believed AI tool usage was minimal. “Maybe a few people use it occasionally,” was how the managing partner described it.
The actual picture was different. After structured interviews with team leads and a review of browser activity logs (with consent), we identified eleven distinct AI tools in active use across the firm. Among them:
- ChatGPT (free tier) — used by nine people for drafting client communications, summarising financial documents, and explaining complex provisions in plain language
- Gemini — used primarily by two senior associates for research and slide preparation
- Grammarly — embedded in four team members’ browsers; not typically thought of as an “AI tool,” but it processes all text it’s shown
- Notion AI — active in the firm’s Notion workspace, which contained client project notes
- WhatsApp AI features — newer, less understood, but present on work phones
Six of these tools were being used without any formal approval or documentation. Three had been added to shared workspaces where multiple people had access.
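The browser-log review step can be partially scripted. The sketch below is illustrative only, not the tooling used in the engagement: the watchlist of AI tool domains and the log format are assumptions, and a real inventory would also need the structured interviews, since embedded tools such as Notion AI do not show up as separate domains.

```python
# Minimal sketch: flag visits to known AI tool domains in an exported
# browser activity log. The domain watchlist below is an illustrative
# assumption and would need to be maintained as tools change.
from urllib.parse import urlparse

AI_TOOL_DOMAINS = {
    "chat.openai.com": "ChatGPT",
    "chatgpt.com": "ChatGPT",
    "gemini.google.com": "Gemini",
    "app.grammarly.com": "Grammarly",
    "www.notion.so": "Notion",
}

def inventory(visited_urls):
    """Return {tool_name: visit_count} for URLs matching the watchlist."""
    counts = {}
    for url in visited_urls:
        host = urlparse(url).netloc.lower()
        tool = AI_TOOL_DOMAINS.get(host)
        if tool:
            counts[tool] = counts.get(tool, 0) + 1
    return counts

sample_log = [
    "https://chat.openai.com/c/abc123",
    "https://gemini.google.com/app",
    "https://chat.openai.com/c/def456",
]
print(inventory(sample_log))  # {'ChatGPT': 2, 'Gemini': 1}
```

A scan like this gives you candidates, not conclusions; each hit still needs a conversation about what data went into the tool and under which account.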
The data exposure map
Once we had the tool list, the next step was mapping what data was going into each one.
The most significant finding: client financial statements were being pasted into ChatGPT free accounts on a regular basis. This was happening for legitimate reasons — the consultants were generating commentary drafts and checking calculations. But on a free account, OpenAI’s data policy at the time allowed for conversations to be used to improve their models unless the user had opted out. Most had not.
A second finding: the Notion workspace had been connected to Notion AI. This workspace contained meeting notes from client strategy sessions, some of which included unpublished financial projections and M&A discussions. None of the people involved had explicitly consented to this data being processed by an AI model.
The third finding was a different category of risk. Two team members had used their personal Gmail accounts to access Google’s AI tools, rather than the firm’s Google Workspace account. This meant data was being processed under personal accounts with no enterprise data protection agreement in place.
None of this was careless or reckless. It was people trying to work efficiently, with tools that made their jobs easier, in the absence of any guidance. That context matters. The solution is not to blame individuals.
The credential and access review
Alongside the AI tool audit, we ran a credential review. Three specific issues came up.
First: eight people were using passwords that appeared in public breach databases. We identified these by checking work email addresses against Have I Been Pwned's breach API. These are credentials that adversaries can purchase and test against your systems.
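Firms can run a related check themselves without exposing any passwords. Have I Been Pwned's Pwned Passwords API uses a k-anonymity scheme: only the first five characters of a password's SHA-1 hash are sent, the API returns every breached hash suffix in that range, and the comparison happens locally. The sketch below shows the hashing and response parsing; the network call is indicated in a comment, and the response here is a stand-in, not live breach data.

```python
# Sketch of Have I Been Pwned's Pwned Passwords k-anonymity check.
# Only a 5-character hash prefix would leave the machine; the full
# password and its complete hash never do.
import hashlib

def hash_parts(password):
    """Split the uppercase SHA-1 hex digest into (prefix, suffix)."""
    digest = hashlib.sha1(password.encode("utf-8")).hexdigest().upper()
    return digest[:5], digest[5:]

def breach_count(suffix, range_response):
    """Parse the 'SUFFIX:COUNT' lines the API returns for a prefix."""
    for line in range_response.splitlines():
        candidate, _, count = line.partition(":")
        if candidate == suffix:
            return int(count)
    return 0

prefix, suffix = hash_parts("password")
# In practice you would fetch the range, e.g.:
#   https://api.pwnedpasswords.com/range/<prefix>
# Stand-in response for illustration (counts are made up):
fake_response = "0018A45C4D1DEF81644B54AB7F969B88D65:3\n" + suffix + ":9659365"
print(prefix, breach_count(suffix, fake_response))  # 5BAA6 9659365
```

Any password that comes back with a non-zero count has appeared in a public breach and should be rotated, whatever else is true of the account.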
Second: the firm’s core practice management software had no MFA requirement. Three former employees still had active accounts from the previous year.
Third: the shared email password for the firm’s main client-facing address had not been changed in two and a half years. Four current staff and at least two former staff knew this password.
None of these are exotic vulnerabilities. They’re the kind of thing that appears in the vast majority of SME audits we run.
The 90-day plan
We delivered the StackGuard Report ten days after kickoff. The prioritised action list had three tiers.
Immediate (week one): Revoke the two former employee accounts. Enable MFA on the practice management software. Change the shared email password and restrict it to two named individuals.
Within 30 days: Move any staff who regularly handle client financial data to ChatGPT Team or a comparable enterprise account with a data processing agreement. Audit and archive the Notion workspace to remove historical client data. Brief all staff on a short-form AI usage policy (one page, specific examples, no jargon).
Within 90 days: Run a phishing simulation to test staff response to AI-generated attacks. Set up a quarterly review process for any new AI tools before they’re adopted. Evaluate whether the firm’s engagement letters need to address AI usage.
The managing partner’s comment at the debrief: “We knew we were probably behind on some of this. I didn’t know it was this specific.”
What this means for firms like yours
The situation this firm was in is not unusual. It is, in fact, close to the median for professional services businesses of this size in India. Most have some AI tool usage happening, most have credential hygiene issues, and most have not had a structured conversation at leadership level about where the boundaries are.
The value of an audit is not the report. It’s the specificity. Knowing exactly which tools, exactly which data, exactly which accounts to fix. That’s what makes the 90-day plan executable rather than aspirational.
If you’re unsure what a review like this would find in your firm, that uncertainty is usually reason enough to find out. A StackGuard audit follows this exact process — tool inventory, data mapping, credential review, written report, and a debrief with your leadership team.