Run the Plan Then Verify Loop
Use plan mode, task decomposition, and explicit verification criteria for real delivery work.
Lesson outcome
You will leave this lesson with a standard execution loop for any meaningful task: clarify the goal, break it into checkable steps, do the work in bounded chunks, and verify the result against a real acceptance standard.
Why this matters in an agency
Agency work fails quietly when tasks skip the planning step. A brief turns into implementation without anyone checking prerequisites. A landing page draft goes out without comparing it to the offer. A reporting change ships without confirming the data source. Plan mode matters because it forces the agency to expose assumptions before they become cleanup work. Verification matters because even strong model output still needs a standard that says what "done" means.
Inputs, tools, and prerequisites
Use a live task from the agency rather than an invented example. Good candidates are writing a pricing page, cleaning a CRM workflow, scoping a client dashboard, or preparing a reporting template. You also need a place to record the plan, the verification checks, and any lessons captured after the work is done.
Step-by-step walkthrough
Begin by translating the task into a real objective with constraints. "Improve reporting" is not enough. "Rebuild the weekly client report so account managers can produce it in under thirty minutes while preserving the five metrics leadership actually cares about" is better. Once the objective is specific, ask Claude for a plan that includes dependencies, unknowns, and a proposed sequence of work. Review the plan before execution starts. This is the point where missing context should surface.
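The gap between a vague objective and a specific one can be made mechanical. A minimal sketch in Python, where every field name and value is illustrative rather than prescribed:

```python
# A specific objective carries its own constraints and surfaced unknowns.
# All field names and values here are hypothetical examples.
objective = {
    "goal": "Rebuild the weekly client report",
    "constraints": [
        "Account managers can produce it in under thirty minutes",
        "The five metrics leadership cares about are preserved",
    ],
    "unknowns": ["Which data source feeds the current pipeline metric"],
}

def is_specific(obj: dict) -> bool:
    # "Improve reporting" fails this test: it has no constraints
    # and surfaces no unknowns for the plan review to catch.
    return bool(obj.get("goal")) and bool(obj.get("constraints"))
```

Treating the objective as data like this also makes it easy to paste into a planning prompt verbatim, so the constraints travel with the request.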
Now split the work into bounded subtasks. Bounded means the subtask has a narrow write scope, a visible deliverable, and a clear stop condition. For example, "audit the existing report fields" is bounded. "Fix reporting" is not. If you use subagents or separate review passes, assign each a distinct problem. One can gather evidence, one can draft, and one can review. The important point is that no subtask should silently expand into a second project.
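"Bounded" can be checked rather than judged. A sketch of what the three boundaries look like as a data structure, with all names and example values invented for illustration:

```python
from dataclasses import dataclass

@dataclass
class Subtask:
    """A bounded unit of work. Field names are illustrative, not prescribed."""
    name: str
    write_scope: str      # the only files or assets this subtask may change
    deliverable: str      # the visible artifact it produces
    stop_condition: str   # the point at which it is unambiguously done

def is_bounded(task: Subtask) -> bool:
    # A subtask is bounded only if every boundary is actually stated.
    return all([task.write_scope, task.deliverable, task.stop_condition])

audit = Subtask(
    name="Audit the existing report fields",
    write_scope="docs/report-field-audit.md",
    deliverable="Table of current fields with a keep/drop recommendation",
    stop_condition="Every field in the current report is listed and tagged",
)
fix_everything = Subtask(
    name="Fix reporting", write_scope="", deliverable="", stop_condition=""
)
```

Running `is_bounded` over a plan before execution is a cheap way to catch the subtasks that would otherwise silently expand.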
Before execution begins, define the verification rules. These should be concrete: Does the report still reflect the approved KPIs? Does the workflow still pass test data? Does the copy match the offer? Does the page build successfully? Does the SOP reflect the actual implementation? Verification should not be "looks good." It should be a short list of observable checks.
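A short list of observable checks can be written down as exactly that: named predicates run against the deliverable. A sketch, assuming a hypothetical report represented as a dict, with the KPI names and the thirty-minute limit invented for the example:

```python
# Verification rules as observable checks rather than "looks good".
# The report shape, metric names, and limit are all hypothetical.
APPROVED_KPIS = {"mrr", "churn", "pipeline", "nps", "utilization"}

def kpis_preserved(report: dict) -> bool:
    return APPROVED_KPIS <= set(report["metrics"])

def produced_in_time(report: dict, limit_minutes: int = 30) -> bool:
    return report["minutes_to_produce"] <= limit_minutes

CHECKS = [
    ("approved KPIs still present", kpis_preserved),
    ("producible in under thirty minutes", produced_in_time),
]

def verify(report: dict) -> list[str]:
    """Return the names of failed checks; an empty list means verified."""
    return [name for name, check in CHECKS if not check(report)]
```

The point of the list shape is that a failed verification names exactly which acceptance condition drifted, instead of producing a vague thumbs-down.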
After the work is complete, run the verification pass as a separate step. That can be Claude checking against a checklist, Codex reviewing for gaps, a test suite, or a manual review of the exact conditions that matter. If the verification pass finds drift, route back into the plan rather than patching blindly. Close by logging one or two lessons that would have made the task cleaner next time.
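The whole loop, including the route back into planning on drift, can be sketched as a small driver. The callables are stand-ins for whatever actually does each step (Claude, a test suite, a manual review), and the round limit is an assumed safeguard, not part of the source method:

```python
def plan_then_verify(plan_fn, execute_fn, verify_fn, max_rounds: int = 3):
    """Plan, execute, verify; if verification finds drift, return to
    planning with the failures rather than patching output blindly."""
    plan = plan_fn(feedback=None)
    failures: list = []
    for _ in range(max_rounds):
        result = execute_fn(plan)
        failures = verify_fn(result)          # separate review step
        if not failures:
            return result                     # verified, not just produced
        plan = plan_fn(feedback=failures)     # revise the plan, not the patch
    raise RuntimeError(f"still failing after {max_rounds} rounds: {failures}")
```

Note that `verify_fn` runs as its own step after execution, which mirrors the distinction between execution output and verified output.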
Failure modes and verification checks
The common failure modes are skipping acceptance criteria, creating subtasks that are too broad, and confusing execution output with verified output. Check yourself by asking three questions: Can each subtask be completed independently? Is the definition of done measurable? Is there a separate review step after implementation? If any answer is no, the loop is incomplete.
Implementation checklist
- Choose one live agency task.
- Rewrite it as a specific objective with constraints.
- Break it into bounded subtasks.
- Define verification rules before execution.
- Run a separate review pass after the work is done.
- Capture at least one lesson into your notes.
Immediate next action
Take a current deliverable and force it through the plan-then-verify loop before the day ends. Once you feel the difference, this becomes the default operating pattern.