A private equity associate uses an AI agent to screen a new deal. The agent pulls relationship data, cross-references prior engagements, flags prior investment activity, and misses a conflict. The miss isn't a data problem. It's an access control problem, and very few firms building AI workflows in professional services are thinking about it correctly.
That governance gap drives almost every AI compliance failure in regulated professional services right now, and it's what Intapp built Celeste to close.
On February 25, 2026, Intapp (NASDAQ: INTA) announced the launch of Intapp Celeste, an agentic AI platform for professional firms in highly regulated industries. The launch came alongside a strategic partnership with AI legal platform Harvey and a collaboration with Anthropic. At its Investor Day a few days later, management named an "additional $30 billion Agentic Opportunity" on top of its existing $20 billion addressable market and set a target of $1 billion in ARR by FY29.
Those are big numbers. But they're not what makes Celeste worth studying if you run a mid-market law firm, accounting firm, or private capital shop. What matters is the architecture, and what it reveals about what enterprise-grade AI for regulated firms actually has to look like.
The Real Problem with Generic AI in Professional Services
Most AI tools that land in professional firms weren't designed for them. They were designed for general enterprise use, then sold into legal, accounting, and capital markets teams on the strength of surface-level capabilities: drafting, summarization, Q&A, research.
Those capabilities are genuinely useful. The compliance gap that travels with them is not. Consumer-grade and even mid-market enterprise AI tools typically handle governance the same way: they rely on users to self-police. The system doesn't know which information is behind an ethical wall. It doesn't know that two attorneys on different sides of a deal shouldn't be accessing the same matter data. It doesn't track whether an AI-generated output touched MNPI (material non-public information). Audit trails, when they exist at all, are retrofitted rather than native.
This isn't a theoretical risk. The ABA's Model Rules of Professional Conduct, particularly Rule 1.6(c) on confidentiality and Rules 5.1/5.3 on supervisory responsibility, make clear that attorneys are professionally responsible for any unauthorized disclosure resulting from inadequate security measures, including those in their technology stack. Courts have handed down sanctions for AI-related errors. Regulators are paying close attention.
The problem isn't that AI can't work in these environments. The problem is that most of the AI tools pitched to professional firms were built for different environments, and compliance was an afterthought.
Intapp took a different starting point with Celeste.
What "Governed Agentic AI" Actually Means
Celeste's central design premise is what Intapp calls "professional compliance by design." Every agent action, from analyzing deal data to flagging conflicts to generating a client advisory report, is subject to the firm's compliance policies before it executes, not after.
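To make the pattern concrete, here's a minimal sketch of a pre-execution policy gate. It illustrates the pattern only; it is not Celeste's code, and every name in it (the action shape, the policy interface) is hypothetical.

```python
# A minimal sketch of "checked before it executes, not after."
# Action and policy shapes here are hypothetical, not Celeste's API.
from dataclasses import dataclass
from typing import Callable, Optional


@dataclass
class AgentAction:
    name: str                    # e.g., "generate_client_advisory"
    matter_id: str
    run: Callable[[], str]       # the work the agent wants to do


def pre_execution_gate(
    action: AgentAction,
    policies: list[Callable[[AgentAction], Optional[str]]],
) -> str:
    # Every policy gets a veto BEFORE the action runs. Post-hoc output
    # review can't offer the same guarantee: by the time it fires,
    # restricted data may already have crossed a boundary.
    for policy in policies:
        denial = policy(action)  # a denial reason, or None to allow
        if denial is not None:
            return f"BLOCKED {action.name}: {denial}"
    return action.run()
```

A wall policy plugged into that gate would return a denial reason whenever the action's matter sits behind an ethical wall for the requesting user, and the action would simply never run.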
That distinction matters enormously in practice. Three architectural principles make it real.
Auditability: Every Action Has a Receipt
Celeste provides what its product page describes as "searchable records of every agent action, including inputs, data sources, reasoning, and outputs." When an auditor, a regulator, or opposing counsel asks "what did your AI do, and why?" a complete audit trail gives you a precise answer. A system that only produces outputs does not, and in a regulatory inquiry that difference is everything.
For mid-market firms, this has immediate practical implications. If your current AI stack can't tell you which data sources fed a particular output, which user had access when the query ran, and what the agent's reasoning path was, you have an audit gap. That gap is a liability.
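What does an audit trail in that sense actually contain? A minimal sketch, assuming illustrative field names rather than Celeste's actual schema:

```python
# A minimal sketch of an audit-grade record; every field name here
# is illustrative, not Celeste's schema.
from dataclasses import dataclass, field
from datetime import datetime, timezone


@dataclass(frozen=True)
class AgentAuditRecord:
    """One immutable record per agent action: enough to answer
    'what did your AI do, and why?' without reconstructing state."""
    action_id: str
    agent_name: str
    acting_user: str               # whose permissions the agent ran under
    inputs: str                    # the request as received
    data_sources: tuple[str, ...]  # every source the agent consulted
    reasoning_summary: str         # the recorded reasoning path
    output_ref: str                # pointer to the stored output
    recorded_at: datetime = field(
        default_factory=lambda: datetime.now(timezone.utc)
    )
```

The test in the paragraph above maps directly onto the fields: data_sources answers which sources fed the output, acting_user answers who had access when the query ran, and reasoning_summary answers what the agent's reasoning path was. A plain application log typically captures none of the three in queryable form.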
The Agent Inherits Your Firm's Permission Structure
Rather than creating a separate permission layer for AI, Celeste's agents inherit and enforce the permissions that already exist in Intapp's compliance infrastructure, including ethical walls.
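In code, the inheritance pattern is simple to express. This sketch assumes one thing the paragraph implies: the agent holds no credential of its own and borrows the requesting professional's entitlements on every read. All names are hypothetical.

```python
# A minimal sketch of permission inheritance. The agent never gets a
# separate permission table; it sees exactly what its principal sees.

ETHICAL_WALLS = {
    "matter-4417": {"asmith"},     # users screened off this matter
}

MATTER_ACCESS = {
    "asmith": {"matter-1020", "matter-2203"},
    "bkhan": {"matter-1020", "matter-4417"},
}


def can_read(user: str, matter_id: str) -> bool:
    """The same check the firm already applies to humans."""
    if user in ETHICAL_WALLS.get(matter_id, set()):
        return False
    return matter_id in MATTER_ACCESS.get(user, set())


def agent_fetch(user: str, matter_id: str) -> str:
    # No second AI permission layer to maintain or drift out of sync:
    # the wall that binds the lawyer binds the agent.
    if not can_read(user, matter_id):
        raise PermissionError(f"{user} is screened from {matter_id}")
    return f"<documents for {matter_id}>"
```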
The Harvey partnership makes this concrete. Harvey is now used by more than half of the AmLaw 100 for legal research, drafting, and analysis. But until very recently, Harvey's AI didn't know which matters were behind ethical walls in a given firm. Lawyers had to self-police. Under the new integration, Intapp Walls for AI policies automatically sync with Harvey's access and sharing controls across its Assistant, Vault, and Workflows products.
Harvey CEO Winston Weinberg put it plainly: "Legal teams have spent years building rigorous professional responsibility standards in Intapp, and they need those standards to follow them into every tool they use."
That principle, that compliance rules should follow the AI rather than just the humans, is the right framework for any mid-market firm evaluating agentic tools. The question to ask any vendor: do your agents inherit our existing access controls, or do we rebuild them from scratch inside your platform?
The Harder Problem: What the Agent Knows It Can't Touch
Respecting access controls on data retrieval is necessary but not sufficient. The agent's entire decision-making process needs to be compliance-aware.
Celeste handles this through the Celeste Context Engine, a layer that grounds agent actions in the firm's proprietary data, relationship maps, entity structures, and compliance policies. The agent doesn't just know what the firm knows; it knows what it's allowed to do with that knowledge.
In private capital, this means an agent handling fundraising workflows will enforce MNPI controls and investor governance requirements as it operates, not as a separate post-processing check, but as part of its core reasoning. In legal, an agent running matter intake and conflicts clearance will continuously monitor for conflicts rather than check once at initial intake.
The difference between "check at intake" and "continuously monitor" is significant from a malpractice standpoint. The former catches conflicts that exist when a matter opens. The latter catches conflicts that emerge as new matters are opened, new parties are added, or lateral hires bring new relationship histories into the firm.
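Expressed as a sketch (event names hypothetical), the contrast comes down to one thing: intake-only systems run the conflict check once, while a monitoring agent subscribes to every change in the relationship graph and re-runs the same check.

```python
# A minimal sketch of continuous conflict monitoring, with
# hypothetical event names. An intake-only system would run
# check_conflicts() once and stop listening.
from typing import Callable

CONFLICT_TRIGGERS = {"matter_opened", "party_added", "lateral_hire_joined"}


def check_conflicts(adverse_parties: set[str],
                    firm_clients: set[str]) -> set[str]:
    """Parties adverse on this matter who are also firm clients."""
    return adverse_parties & firm_clients


def on_graph_event(event: str,
                   adverse_parties: set[str],
                   firm_clients: set[str],
                   alert: Callable[[set[str]], None]) -> None:
    # Continuous monitoring treats each of these events as a reason
    # to re-check, because each can create a conflict that did not
    # exist when the matter opened.
    if event in CONFLICT_TRIGGERS:
        hits = check_conflicts(adverse_parties, firm_clients)
        if hits:
            alert(hits)
```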
The 1,000+ Use Case Library: Why Pre-Built Matters
Architecture discussions often skip past the deployment question. Building compliant AI workflows from scratch is expensive, slow, and requires expertise most mid-market firms don't have in-house.
Celeste launches with over 1,000 pre-built, industry-specific use cases across accounting, investment banking, legal, private capital, and real assets. For accounting firms, that includes client onboarding and KYC, audit workpaper preparation, tax research and memo drafting, and regulatory compliance monitoring. For private capital, it covers opportunity screening, due diligence synthesis, portfolio value creation, and investor governance and compliance.
Each use case ships with compliance guardrails already configured for that industry context. An accounting firm's KYC agent and a private capital firm's investor onboarding agent start from different compliance baselines, not a generic template they have to retrofit.
Intapp cofounder and Chief Product Officer Thad Jampol frames the pre-built library as the mechanism that moves firms from experimenting with AI to actually running on it: "They are on the verge of an agentic future where agents will fundamentally reshape how they operate, compete, and ultimately win."
Pre-built doesn't mean rigid. Celeste also supports custom agents. But for firms that lack the engineering resources to build from scratch, pre-built with compliance embedded is dramatically better than bolting compliance onto a generic tool after the fact.
The Partner Ecosystem: Interoperability Without Governance Loss
Celeste's MCP (Model Context Protocol) integrations address one of the thorniest problems in enterprise AI adoption: how do you connect multiple AI tools without creating governance gaps at the integration points?
The Anthropic collaboration brings Claude's reasoning capabilities into the Intapp ecosystem, with Intapp controlling what firm data Claude can access and under what compliance conditions. A third integration, with Microsoft, lets Copilot users access Intapp data, with Celeste serving as the compliance layer that determines what data is exposed, including enforcement of ethical walls, MNPI controls, and independence requirements.
The pattern across all three partnerships is the same: Celeste acts as the governed access layer. Frontier models handle the reasoning, and they're good at it. What they can't supply is institutional memory: which relationships are conflicted, which data sits behind an ethical wall, which investor has a side letter that limits what they can see. Without that, you have a powerful reasoning engine with no map of the terrain it's not allowed to cross.
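The governed-access pattern itself is easy to sketch. Nothing below is Intapp's, Anthropic's, or Microsoft's actual API; it just shows the gate every retrieval passes through before an external model sees firm data.

```python
# A minimal sketch of a governed access layer between an external
# model and firm data. Tags, fields, and the retrieval interface
# are illustrative only.

RESTRICTED_TAGS = {"mnpi", "ethical_wall", "independence_restricted"}


def governed_retrieve(user: str, store: list[dict]) -> list[dict]:
    """Everything the external model sees passes through this gate;
    nothing restricted leaves the firm's side of the integration."""
    visible = []
    for doc in store:
        if RESTRICTED_TAGS & set(doc.get("tags", [])):
            continue  # categorical controls: filtered for everyone
        if user not in doc.get("allowed_users", ()):
            continue  # per-user entitlements still apply on top
        visible.append(doc)
    return visible
```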
For mid-market firms evaluating multi-vendor AI strategies, this is the lesson. The question isn't just "which AI tools are best?" It's "what serves as the compliance layer when those tools interact?"
What Happens When the Agent Gets It Wrong?
That's the objection keeping most mid-market professional services firms on the AI sidelines, and it deserves a direct answer.
For most current AI deployments in professional services, the honest answer is that nobody fully knows, and the firm holds the liability. Celeste's architecture doesn't make AI infallible; nothing does. But it changes the accountability structure in ways that matter.
The "professional-in-the-loop" design means human review, approval, and refinement are built into agentic workflows at the points where judgment matters most. The agent automates the high-volume, structured portions of a workflow and surfaces outputs for professional validation before consequential actions are taken.
When something does go wrong, complete audit trails mean the firm can reconstruct exactly what happened: what data the agent accessed, what it produced, who reviewed it, and what decision followed. The ability to demonstrate a reasonable process, even after an error, is materially different from having no record at all.
Compliance-aware orchestration means the agent is less likely to make certain categories of error in the first place, specifically the errors that arise from accessing information it shouldn't or acting on inputs that cross compliance boundaries.
None of this is a guarantee. But it moves the answer from "we hope it doesn't happen" to "we have a documented, defensible process."
Questions Worth Putting to Every AI Vendor
The five questions that Celeste's architecture teaches you to ask are genuinely useful regardless of which platform ends up on your shortlist.
The question that disqualifies vendors fastest is probably the one on ethical walls: how does your platform handle information barriers between matters or clients? If the answer involves users manually managing access, that's a reliable red flag. The agent itself should be wall-aware, and the policy enforcement should be automatic, not dependent on individuals remembering to apply it.
Close behind it: do your agents inherit our existing access controls, or do we configure a separate permission layer? Separate layers mean double maintenance and gap risk. And can you show me the full audit trail for a specific agent action, including inputs, data sources, reasoning, and outputs? If the answer is "we have logs," push harder. Logs and audit trails are not the same thing.
Two more are worth adding to the list. Where does human review happen in the workflow, and is it enforced or optional? Optional human review in high-stakes workflows is not meaningful oversight. And: what's your model for compliance at integration points with other AI tools? In a multi-vendor AI stack, governance gaps emerge at the seams. Know who owns the compliance layer across your tools.
What Firms Should Take From This
Celeste is in limited availability and won't be right for every firm. The architecture it embodies, though, is the clearest public articulation to date of what compliance-first AI actually has to look like in regulated professional services.
Firms that use AI well over the next few years will have figured out governance before they scaled capability. Celeste demonstrates three non-negotiables: auditability, role-based access inheritance, and compliance-aware orchestration. Those are the right criteria for evaluating any agentic AI solution, whether that's Celeste or something else entirely.
The firms that get there first won't just be safer. They'll be faster.