Time to read: 14 minutes
Time to apply: 30 minutes to set up first orchestration
Prerequisites: Patterns 1-5, especially Tool Composition
Most people use one agent for everything. Code, research, review, deployment — all in a single conversation. The agent's context fills with intermediate outputs. Research results mix with code snippets. The agent slows down, makes more mistakes, and eventually hits context limits.
Orchestration is the solution: splitting complex work across specialist agents working in parallel, each with its own context window, each focused on one thing.
Don't orchestrate everything. Single-agent work is faster for simple tasks. Orchestrate when:

- The work splits into independent streams that can run in parallel
- A single context window can't hold all the intermediate outputs
- A specialist's skills and tools beat a generalist working from scratch
The most common orchestration pattern: three agents, three jobs, parallel execution.
```
Main Agent (you)
├── Research Agent → "Research rate limiting strategies. Return structured comparison."
├── Review Agent → "Review PR #47. Check for security issues, missing tests, API consistency."
└── Build Agent → "Set up project skeleton: src/, tests/, pyproject.toml, CI config."
```
Each agent gets only the tools its job needs: research agents get `web_search`, review agents get `read_file` + terminal, build agents get the full toolkit. Results come back as summaries. You review, integrate, and move on.
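A minimal sketch of this fan-out in Python. `spawn_agent`, the tool names, and the summary format are placeholders for whatever your agent framework actually provides; only the role-to-tool scoping mirrors the split above.

```python
import asyncio

# Hypothetical role-to-tool allowlists, mirroring the scoping above.
TOOLS_BY_ROLE = {
    "research": ["web_search"],
    "review": ["read_file", "terminal"],
    "build": ["web_search", "read_file", "write_file", "terminal"],  # full toolkit
}

async def spawn_agent(role: str, prompt: str) -> str:
    """Placeholder: run a sub-agent in its own context window with only
    the tools its role needs, and return its summary."""
    tools = TOOLS_BY_ROLE[role]
    await asyncio.sleep(0)  # stands in for the actual agent run
    return f"[{role}] (tools: {', '.join(tools)}) finished: {prompt[:50]}..."

async def main() -> None:
    # Three independent streams, dispatched in parallel.
    summaries = await asyncio.gather(
        spawn_agent("research", "Research rate limiting strategies. Return structured comparison."),
        spawn_agent("review", "Review PR #47. Check for security issues, missing tests, API consistency."),
        spawn_agent("build", "Set up project skeleton: src/, tests/, pyproject.toml, CI config."),
    )
    for summary in summaries:  # review, integrate, move on
        print(summary)

asyncio.run(main())
```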
Beyond parallel execution, orchestration enables specialists. A general agent can do anything adequately. A specialist agent with the right skills and tools does one thing exceptionally well.
Example specialists from actual usage:
- SPFx specialist: `spfx-local`, `spfx-heft-build-breakfix`
- Code reviewer: `requesting-code-review`, `systematic-debugging`
- Debugger: `systematic-debugging`, `python-debugpy`
The key: each specialist has access to skills the general agent doesn't. The SPFx specialist loads spfx-heft-build-breakfix with all 6 known failure modes and their fixes. A general agent would diagnose from scratch every time.
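One way to wire this up is a registry mapping each specialist to the skills it loads into its system prompt. A sketch: the registry shape and `load_skill` loader are assumptions, not a real API; only the skill names come from the list above.

```python
# Assumed layout: one markdown document per skill under skills/.
SPECIALISTS = {
    "spfx": ["spfx-local", "spfx-heft-build-breakfix"],
    "code-review": ["requesting-code-review", "systematic-debugging"],
    "debugging": ["systematic-debugging", "python-debugpy"],
}

def load_skill(name: str) -> str:
    """Hypothetical loader; in practice this might read skills/<name>.md."""
    return f"<contents of skills/{name}.md>"

def specialist_prompt(specialist: str) -> str:
    """Assemble a system prompt that inlines the specialist's skills,
    so it starts with expertise the general agent lacks."""
    skills = "\n\n".join(load_skill(name) for name in SPECIALISTS[specialist])
    return f"You are the {specialist} specialist.\n\n{skills}"

print(specialist_prompt("spfx"))
```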
The biggest orchestration mistake: sending too little context. An agent that doesn't know your project structure, conventions, or constraints will invent its own — and get everything wrong.
Good context packing:
```
Goal: Review PR #47 for security issues and test coverage.
Context: Project uses FastAPI + SQLite. Tests in tests/ with pytest.
Conventions: Run tests with `python3.11 -m pytest -n 4 -v`.
Key files: api/main.py, src/auth.py, tests/test_auth.py.
PR adds OAuth2 flow. Check: token validation, refresh mechanism, CSRF protection.
```
Bad context packing:
```
Goal: Review PR #47.
```
The difference: the first agent produces a review that matches your project. The second agent reviews against generic standards and misses project-specific issues entirely.
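A cheap way to enforce good packing is to make the brief a structure rather than a string, so a task can't be dispatched with ingredients missing. A sketch with assumed field names, rendering the same brief as above:

```python
from dataclasses import dataclass

@dataclass
class TaskBrief:
    goal: str
    context: str          # stack, where things live
    conventions: str      # how to run tests, style rules
    key_files: list[str]  # where to start reading
    checks: str           # what specifically to verify

    def render(self) -> str:
        return "\n".join([
            f"Goal: {self.goal}",
            f"Context: {self.context}",
            f"Conventions: {self.conventions}",
            f"Key files: {', '.join(self.key_files)}",
            self.checks,
        ])

brief = TaskBrief(
    goal="Review PR #47 for security issues and test coverage.",
    context="Project uses FastAPI + SQLite. Tests in tests/ with pytest.",
    conventions="Run tests with `python3.11 -m pytest -n 4 -v`.",
    key_files=["api/main.py", "src/auth.py", "tests/test_auth.py"],
    checks="PR adds OAuth2 flow. Check: token validation, refresh mechanism, CSRF protection.",
)
print(brief.render())
```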
A real example from building the Knowledge Platform:
Task 1 (Research Agent): "Research FastAPI rate limiting libraries — slowapi, fastapi-limiter, custom middleware. Return comparison with pros/cons, token overhead, and production readiness."
Task 2 (Build Agent): "Create src/rate_limit.py with TokenBucket implementation. Follow existing patterns in api/main.py. Include type hints and docstrings."
Both run in parallel. Total time: 4 minutes (the longer of the two). Sequential: 8+ minutes.
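A toy check of that arithmetic: two simulated tasks dispatched together finish in the time of the longer one, not the sum. Durations are invented and scaled down for the demo.

```python
import asyncio
import time

async def simulated_task(name: str, minutes: float) -> str:
    await asyncio.sleep(minutes * 0.01)  # 0.01 s stands in for one minute
    return name

async def main() -> None:
    start = time.monotonic()
    await asyncio.gather(simulated_task("research", 4), simulated_task("build", 3))
    elapsed = (time.monotonic() - start) / 0.01
    print(f"parallel wall-clock: ~{elapsed:.0f} simulated minutes")  # ~4, not 7

asyncio.run(main())
```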
Main Agent: "Build a SharePoint web part for employee directory."
→ Spawns SPFx Specialist with spfx-local skill
→ Specialist scaffolds, builds, fixes build errors using spfx-heft-build-breakfix
→ Returns: working web part file + build verification
Main Agent: "Review the web part for accessibility."
→ Spawns Code Reviewer with requesting-code-review skill
→ Reviewer checks: ARIA labels, keyboard navigation, colour contrast
→ Returns: 3 issues found, 2 suggestions
The main agent never touched SPFx specifics or accessibility standards — it orchestrated specialists who already have that expertise.
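The same flow as a sequencing sketch. `run_specialist` is a hypothetical stand-in for spawning a specialist with its skills loaded; the main agent's whole job is to order the steps and forward artifacts.

```python
import asyncio

async def run_specialist(name: str, skills: list[str], task: str) -> str:
    """Placeholder: spawn a specialist agent with the given skills
    and return its summarised result."""
    await asyncio.sleep(0)
    return f"[{name} via {'+'.join(skills)}] {task[:60]}..."

async def main() -> None:
    # Step 1: build with the SPFx specialist.
    web_part = await run_specialist(
        "spfx", ["spfx-local", "spfx-heft-build-breakfix"],
        "Build a SharePoint web part for employee directory.",
    )
    # Step 2: hand the artifact to the reviewer.
    review = await run_specialist(
        "code-review", ["requesting-code-review"],
        f"Review for accessibility (ARIA, keyboard nav, contrast): {web_part}",
    )
    print(review)

asyncio.run(main())
```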
Time investment: 5 minutes to set up parallel execution. Return: 2-3x throughput on multi-stream work.
Next: Pattern 7, Pipelines — agents that run while you sleep. Cron jobs, scheduled builds, monitoring, zero human intervention. This is where agent work becomes infrastructure.
Pattern 6 of 10. From the Works With Agents methodology.