When a firm decides to adopt AI seriously, the most common first move is designating one person as the internal lead. That person needs to produce results before they can build credibility with the rest of the team. Generic training does not get them there — it teaches the tool without the judgment needed to use it on real work.
This engagement closes that gap. Every session is structured around your current cases, your deadlines, and your output format. The AI Champion does not leave with a certificate. They leave with production capability and a standard they can demonstrate to colleagues.
1. What this engagement is
The 1:1 AI Champion Enablement is a direct coaching engagement built around the specific role of your designated internal AI lead. Sessions are scheduled around active casework so training happens in context, not in isolation. Over the course of the engagement, the AI Champion develops production-level capability across intake, research, source verification, output drafting, and documentation tasks relevant to their practice area.
2. Who it is for
This engagement is designed for one individual who has been given, or is taking on, the internal AI lead role at a firm or agency. It is well suited for:
- Senior investigators tasked with establishing AI research standards for a team
- Associates or paralegals appointed as internal AI leads at law firms
- Research editors at news organizations piloting AI in the editorial workflow
- Analysts at law enforcement units who will train or advise colleagues on AI tool use
The engagement works best when the AI Champion has at least one active case they can bring into training sessions, and when they have the authority or mandate to document and share the standard they develop.
3. How sessions are structured
Sessions are direct and practical. Before each session, the AI Champion identifies a current task or deliverable they are working toward. That becomes the session's center of gravity. We work through the task together — building prompts, verifying sources, catching failure modes, and producing an output that meets the firm's quality standard.
Between sessions, the AI Champion applies what was covered on their own. The next session opens with a review of what worked, what broke, and what needs refinement. This creates a rapid iteration loop that classroom training cannot replicate.
Sessions can be conducted remotely or, for deeper alignment on complex or high-stakes matters, on-site.
4. What the AI Champion walks away with
- A role-specific playbook documenting the prompts, workflows, and verification checkpoints that work for their practice area
- Completed live-case outputs produced during training that serve as quality reference examples
- A repeatable quality-control checklist aligned to the firm's output standards and review process
- The ability to run AI-assisted research on active cases independently, without needing step-by-step oversight
- A clear articulation of where AI helps, where it does not, and how to communicate that distinction to colleagues and leadership
5. Setting the standard for the rest of the team
An AI Champion's most important function is not their own productivity. It is the standard they set for everyone else. When the Champion can demonstrate a clean workflow, produce source-cited outputs, and explain exactly what verification steps they applied, that becomes the team's reference point.
This engagement is designed with that multiplier effect in mind. The playbook is written to be shared. The checklist is formatted for team use. The outputs are labeled in a way that makes the method transparent to any reviewer.
6. Format and duration
The engagement runs on a session cadence matched to the AI Champion's workload and current matter pipeline. Most engagements involve six to ten direct sessions over three to six weeks, with asynchronous support between sessions for quick questions or output reviews.
A completion review at the close of the engagement assesses readiness for independent operation and identifies any remaining gaps before the formal engagement ends.
7. Scope and expectations
This is a coaching engagement for one person on one role. It does not cover organization-wide training rollouts, software licensing decisions, or technical infrastructure. If your team needs a broader workflow baseline first, the Operational AI Enablement Pilot is the recommended starting point.
The AI Champion should have active cases they can bring to sessions. Hypothetical exercises or simulations built on closed matters are acceptable substitutes, but they produce slower results than live work.
Develop your internal AI lead around real casework
The fastest way to build lasting AI capability in a firm is to develop one person who can demonstrate the standard, document the method, and train others from actual production experience. This engagement builds that person.