Learning Objectives
By the end of this lesson, you will be able to:
- Define task decomposition in the context of agentic systems
- Identify the three decomposition strategies and when to apply each
- Explain how decomposition granularity affects reliability and debugging
- Apply decomposition principles to a realistic scenario
Task Decomposition Strategies
What Task Decomposition Is
Task decomposition is the process of breaking a large, complex goal into smaller, independently executable subtasks that an agentic system can process reliably. Rather than instructing a single agent to complete an entire workflow, the orchestrator identifies the constituent operations, defines the dependencies between them, and assigns each operation to the appropriate agent or process.
Decomposition is a structural requirement for any agentic system that handles goals complex enough to exceed what a single context window can reliably manage. Without deliberate decomposition, agents attempt to reason across too many concerns simultaneously — producing outputs that are difficult to verify, difficult to debug, and prone to error at the points where those concerns intersect.
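To make the definition concrete, a subtask can be modelled as a small record carrying its instruction, its assigned agent, and its declared dependencies. The sketch below is one hypothetical representation, not a prescribed schema:

```python
from dataclasses import dataclass, field

@dataclass
class Subtask:
    """One independently executable unit of work."""
    name: str         # identifier the orchestrator uses for sequencing
    agent: str        # which worker agent executes this subtask
    instruction: str  # the scoped prompt the worker receives
    depends_on: list[str] = field(default_factory=list)  # prerequisite subtask names

# A two-step workflow: extraction cannot start until research completes.
workflow = [
    Subtask(name="research", agent="researcher",
            instruction="Retrieve current pricing data for vendor X."),
    Subtask(name="extract", agent="extractor", depends_on=["research"],
            instruction="Pull structured pricing fields from the research output."),
]
```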
Why Decomposition Matters
Four properties of well-designed agentic systems depend directly on decomposition quality.
Context limits. A single Claude instance has a finite context window. A task that requires retrieving data from multiple sources, processing it through several transformations, and producing a structured output will accumulate context that eventually degrades the agent’s reasoning. Decomposing the task distributes that context burden across multiple agents, each of which operates within a manageable window.
Reliability. A subtask with a narrow, well-defined input and output is easier to execute correctly than a broad, open-ended task. When each subtask is scoped clearly, the agent responsible for it can focus its reasoning on a single concern. Errors that do occur are isolated rather than entangled with the rest of the workflow.
Parallelisation. Independent subtasks can run simultaneously across separate agents. Identifying which subtasks are genuinely independent is only possible once the task has been decomposed explicitly — without decomposition, a sequential agent must complete each operation before starting the next, regardless of whether those operations share any dependencies.
Testability. Each subtask can be evaluated in isolation. You can run a worker agent against a controlled input and verify that its output meets the expected structure and content criteria before integrating it into the full workflow. This independent testability is not available when all operations are handled within a single undifferentiated agent.
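As an illustration, the sketch below tests a hypothetical extraction worker against a controlled input, checking both the structure and the content of its output before it would be integrated into a workflow; `extract_pricing` stands in for a real worker call:

```python
def extract_pricing(document: str) -> dict:
    """Hypothetical worker: pulls a price figure out of a known document format."""
    # Stand-in logic; a real worker would delegate to a model call.
    amount = document.split("$")[1].split()[0]
    return {"currency": "USD", "amount": float(amount)}

# Exercise the subtask in isolation with a controlled input.
result = extract_pricing("Enterprise tier: $1200 per seat, billed annually.")
assert set(result) == {"currency", "amount"}   # structural check
assert result["amount"] == 1200.0              # content check
```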
Three Decomposition Strategies
The appropriate decomposition strategy depends on the relationship between subtasks — specifically, whether each subtask depends on the result of a previous one, whether subtasks are independent, or whether the path through the workflow changes based on intermediate results.
Sequential decomposition applies when each subtask depends on the output of the subtask that precedes it. The workflow forms a chain: Subtask A must complete before Subtask B begins, because B’s input is A’s output. This strategy is straightforward to implement and reason about. Its primary failure mode is linear error propagation — a flawed output at any point in the chain degrades all subsequent steps unless explicit validation occurs between transitions.
Sequential decomposition is appropriate when there is a strict logical dependency between steps and when the output of each step meaningfully transforms the data before passing it forward.
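A minimal sketch of such a chain, with a validation check at each transition to catch linear error propagation early; the steps and validators here are illustrative lambdas standing in for worker agent calls:

```python
def run_chain(initial_input, steps):
    """Run subtasks in order, validating each output before passing it forward."""
    data = initial_input
    for run_step, validate in steps:
        data = run_step(data)
        if not validate(data):
            raise ValueError(f"validation failed after step {run_step.__name__}")
    return data

# Hypothetical two-step chain: clean the text, then truncate it.
steps = [
    (lambda text: text.strip().lower(), lambda out: len(out) > 0),
    (lambda text: text[:100],           lambda out: len(out) <= 100),
]
print(run_chain("  RAW SOURCE TEXT  ", steps))
```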
Parallel decomposition applies when subtasks are independent of one another and can execute simultaneously. Multiple agents receive their inputs at the same time, process them concurrently, and return results to the orchestrator, which then merges the outputs into a unified result. This strategy reduces total execution time for tasks with many independent operations.
The critical design requirement for parallel decomposition is the merge step. Outputs from independently operating agents will not automatically be consistent in structure, terminology, or scope. The orchestrator must apply a defined merging strategy and must handle cases where one or more parallel agents return a failure or an incompatible result.
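The sketch below fans three hypothetical vendor-research calls out across a thread pool and merges results keyed by vendor, recording any failed branch explicitly rather than dropping it; `research_vendor` stands in for a real worker call:

```python
from concurrent.futures import ThreadPoolExecutor, as_completed

def research_vendor(vendor: str) -> dict:
    """Hypothetical worker: returns a structured record for one vendor."""
    return {"vendor": vendor, "pricing": f"{vendor} pricing data"}

vendors = ["Acme", "Globex", "Initech"]
results, failures = {}, {}

with ThreadPoolExecutor() as pool:
    futures = {pool.submit(research_vendor, v): v for v in vendors}
    for future in as_completed(futures):
        vendor = futures[future]
        try:
            results[vendor] = future.result()   # merge: keyed by vendor
        except Exception as exc:                # defined handling for a failed branch
            failures[vendor] = str(exc)

print(results, failures)
```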
Conditional decomposition applies when the path through the workflow depends on intermediate results. The orchestrator executes an initial subtask, evaluates the result, and selects the next subtask from a set of defined branches based on that evaluation. This is branching logic applied at the agentic level.
Conditional decomposition is appropriate when the nature of the input determines what processing is required — for example, when a document classification step determines which extraction agent handles the document downstream. Every branch must be explicitly defined, including branches that handle unexpected or out-of-scope results. An undeclared branch produces undefined system behaviour.
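A minimal sketch of this pattern: a classifier selects the downstream handler from an explicit branch table that includes a declared out-of-scope branch. The classifier and handlers are hypothetical stand-ins:

```python
def classify(document: str) -> str:
    """Hypothetical classifier: decides which extraction agent applies."""
    return "invoice" if "invoice" in document.lower() else "unknown"

def extract_invoice(document: str) -> dict:
    return {"type": "invoice", "text": document}

def handle_out_of_scope(document: str) -> dict:
    return {"type": "unhandled", "reason": "no branch matched"}

# Every branch is declared, including the out-of-scope case.
branches = {
    "invoice": extract_invoice,
    "unknown": handle_out_of_scope,
}

doc = "Invoice #4417: total due $980."
result = branches[classify(doc)](doc)
```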
How to Decide Granularity
Granularity refers to the size and scope of each subtask. Errors in either direction produce distinct failure patterns.
Too coarse means each subtask is too broad. The agent assigned to a coarse subtask must reason across multiple concerns within a single context, increasing the probability of reasoning errors and making it difficult to attribute failures to a specific operation. Coarse decomposition transfers the complexity of the original task to the subtask unchanged.
Too fine means each subtask is too narrow. Granularity at this level produces coordination overhead that grows disproportionately relative to the complexity of the work. The orchestrator spends more effort managing subtask inputs, outputs, and sequencing than the subtasks themselves require. A single logical operation spread across many separately tracked steps also makes debugging harder, not easier.
The appropriate granularity places each subtask at the boundary of a coherent, independently verifiable operation — one thing that produces a result another agent or process can consume directly, without requiring further decomposition at the receiving end.
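As a rough illustration, compare how the same work might be scoped at each granularity; these instruction strings are invented for the example:

```python
# Too coarse: one subtask carrying the whole original task unchanged.
coarse = ["Research all vendors, compare them, and write the full report."]

# Too fine: one logical operation scattered across separately tracked steps.
too_fine = ["Open the pricing page.", "Read the first tier.", "Read the second tier."]

# Right-sized: each subtask is one coherent, independently verifiable operation.
right_sized = [
    "Retrieve current pricing, features, and support data for vendor Acme.",
    "Extract a standardised record of data points from the research output.",
]
```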
Who Performs the Decomposition
Decomposition logic belongs in the orchestrator, not in worker agents. The orchestrator’s system prompt defines the workflow structure, the subtask definitions, the sequencing or branching rules, and the merge strategy. Worker agents receive pre-decomposed subtasks as their inputs. They execute and return results without knowledge of the broader workflow.
When decomposition logic leaks into worker agents — when a worker begins making routing decisions, spawning its own subtasks, or redefining its scope based on input it was not designed to evaluate — the system’s behaviour becomes unpredictable and difficult to audit. Worker agents that exceed their defined scope introduce coordination ambiguity that structured decomposition is designed to prevent.
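One way to keep that separation explicit: the orchestrator owns the workflow definition and all routing, while a worker is a plain function of the subtask it receives. The sketch below is a hypothetical illustration of the division of responsibility, not a framework API:

```python
WORKFLOW = {
    "order": ["research", "extract"],   # sequencing lives with the orchestrator
    "agents": {"research": "researcher", "extract": "extractor"},
}

def worker(subtask_name: str, payload: str) -> str:
    """Worker: executes exactly the subtask it was handed, nothing more."""
    # No branching on payload content, no spawning of new subtasks.
    return f"[{subtask_name}] processed: {payload}"

def orchestrate(initial: str) -> str:
    data = initial
    for name in WORKFLOW["order"]:      # routing decisions stay here
        data = worker(name, data)
    return data

print(orchestrate("raw vendor data"))
```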
Failure Modes in Task Decomposition
Three failure modes recur across poorly designed decompositions.
Undefined subtask dependencies occur when the orchestrator dispatches subtasks without specifying which subtask’s output another depends on. Agents operate without the context they require, or the orchestrator attempts to aggregate results before all necessary inputs are available.
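A simple guard against this failure mode is a readiness check before dispatch, verifying that every declared prerequisite has already produced a result. This sketch reuses the hypothetical workflow shape from earlier:

```python
completed = {"research": {"vendor": "Acme"}}   # results keyed by subtask name

def ready_to_dispatch(subtask_name: str, depends_on: list[str]) -> bool:
    """True only when every prerequisite result is available."""
    missing = [d for d in depends_on if d not in completed]
    if missing:
        print(f"{subtask_name} blocked: waiting on {missing}")
        return False
    return True

assert ready_to_dispatch("extract", ["research"])
assert not ready_to_dispatch("compare", ["extract"])   # extract not yet complete
```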
Shared state conflicts in parallel decomposition occur when two agents operating simultaneously read from or write to the same resource without coordination. This produces race conditions — outcomes that depend on which agent completes first — making the system’s behaviour non-deterministic.
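A minimal sketch of the missing coordination: a lock serialises writes to the shared resource so the outcome no longer depends on completion order. The counter stands in for any shared resource, and per-agent outputs merged by the orchestrator afterwards are usually the better design:

```python
import threading

counter = 0
lock = threading.Lock()

def record_result(n: int) -> None:
    """Parallel worker writing to shared state; the lock serialises updates."""
    global counter
    for _ in range(n):
        with lock:          # without this, concurrent increments can be lost
            counter += 1

threads = [threading.Thread(target=record_result, args=(10_000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
assert counter == 40_000    # deterministic only because writes are coordinated
```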
No fallback when a subtask fails leaves the orchestrator without a defined path forward. If a worker agent signals failure and the orchestrator has no fallback instruction, the workflow stalls or proceeds with an incomplete result set, producing a degraded or incorrect final output.
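A minimal sketch of a defined fallback: retry once, then substitute an explicitly marked placeholder so the merge step can see that the result is degraded rather than silently incomplete. The worker and retry policy are illustrative assumptions:

```python
def run_with_fallback(worker, payload, retries: int = 1):
    """Execute a subtask with a defined path forward on failure."""
    for attempt in range(retries + 1):
        try:
            return {"status": "ok", "data": worker(payload)}
        except Exception as exc:
            last_error = exc
    # Fallback: return a marked placeholder instead of stalling the workflow.
    return {"status": "failed", "data": None, "error": str(last_error)}

def flaky_worker(payload):
    raise RuntimeError("upstream source unavailable")

result = run_with_fallback(flaky_worker, "vendor Acme")
print(result)   # status is 'failed'; the merge step can handle it explicitly
```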
Worked Example: Competitive Analysis Report
Consider the task: “Prepare a competitive analysis report comparing three software vendors across pricing, features, and customer support.”
Decomposed, the workflow contains four phases: research, extraction, comparison, and formatting.
Research uses parallel decomposition. Three independent subtasks run simultaneously — one per vendor — each retrieving current information on pricing, features, and customer support for its assigned vendor. The orchestrator merges three structured data sets once all three agents complete.
Extraction uses sequential decomposition. A single agent receives the merged research data and extracts a standardised set of data points for each vendor against the defined comparison criteria. This subtask depends on the completion and merging of the research phase and cannot begin before that merge is available.
Comparison uses sequential decomposition. A single agent receives the extracted data set and produces a structured comparison table, identifying where vendors align and diverge across each criterion. This subtask depends directly on the extraction output.
Formatting uses conditional decomposition. The formatting agent receives the comparison output and applies the appropriate report template based on the output format specified in the original request. If the request specifies a slide deck, the agent applies a presentation structure. If it specifies a written report, it applies a document structure. The path branches on the output format requirement.
Each subtask has a defined input, a defined output, and a clear dependency relationship with adjacent subtasks. Each can be tested independently. Failures are attributable to a specific subtask rather than to the workflow as a whole.
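Putting the phases together, the skeleton below combines all three strategies in one orchestrator. Every function is a hypothetical stand-in for a worker agent call; only the control flow is the point:

```python
from concurrent.futures import ThreadPoolExecutor

def research(vendor):      # parallel phase: one subtask per vendor
    return {"vendor": vendor, "raw": f"data for {vendor}"}

def extract(merged):       # sequential: depends on the completed merge
    return [{"vendor": r["vendor"], "fields": r["raw"]} for r in merged]

def compare(records):      # sequential: depends on the extraction output
    return {"table": records}

def format_report(comparison, output_format):   # conditional: branch on format
    templates = {"slides": "presentation structure", "report": "document structure"}
    return {"template": templates[output_format], "body": comparison}

vendors = ["Acme", "Globex", "Initech"]
with ThreadPoolExecutor() as pool:
    merged = list(pool.map(research, vendors))  # fan out, then merge in order
report = format_report(compare(extract(merged)), "report")
print(report["template"])
```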
Key Takeaways
- Task decomposition breaks a complex goal into independently executable subtasks with defined inputs, outputs, and dependencies — it is a structural requirement for reliable agentic systems, not an optional optimisation.
- The three decomposition strategies — sequential, parallel, and conditional — each suit different dependency structures, and most real workflows combine all three across different transitions.
- Granularity must be calibrated: subtasks that are too coarse transfer the original complexity to the worker agent; subtasks that are too fine generate coordination overhead that outweighs the benefit of decomposition.
- Decomposition logic belongs in the orchestrator; worker agents that begin making routing or scoping decisions outside their defined role introduce coordination ambiguity that degrades system reliability.
What Is Tested
Exam questions on task decomposition fall into three categories:
- Strategy selection: a complex task description is presented, and candidates select the correct decomposition strategy for a specific transition within it — for example, determining whether two subtasks are genuinely independent and suited to parallel execution, or identifying where a conditional branch is required.
- Granularity evaluation: candidates assess whether a described subtask is too coarse to execute reliably or too fine to justify its coordination overhead.
- Design review: a decomposition design is presented, and candidates identify a missing dependency definition, an undeclared conditional branch, or a worker agent that has been incorrectly assigned decomposition responsibilities that belong to the orchestrator.