Parallel Execution
Parallel execution allows multiple nodes to run concurrently, dramatically improving workflow performance for independent operations.

Basic Parallel Execution
Use the parallel() function to execute nodes concurrently:
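The framework's exact signature is not reproduced here; as an illustrative sketch with plain asyncio, a parallel() helper that runs named nodes concurrently and collects their results might look like the following (fetch_user and fetch_orders are hypothetical nodes):

```python
import asyncio

async def parallel(**nodes):
    """Run the given coroutines concurrently; return {node_name: result}."""
    results = await asyncio.gather(*nodes.values())
    return dict(zip(nodes.keys(), results))

# Hypothetical nodes: independent I/O-bound operations.
async def fetch_user():
    await asyncio.sleep(0.01)   # stand-in for an API call
    return {"id": 1, "name": "Ada"}

async def fetch_orders():
    await asyncio.sleep(0.01)
    return [{"order_id": 7}]

results = asyncio.run(parallel(user=fetch_user(), orders=fetch_orders()))
# Node names become the dictionary keys:
print(results["user"]["name"])  # → Ada
```

The downstream node can then read each branch's output by name, e.g. results["orders"].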
Result Collection
Parallel nodes return a dictionary with terminal node names as keys.

Using Results in Next Node
The next node receives the dictionary of results.

Concurrency Control
Limit the number of concurrent executions.

Why Limit Concurrency?
- Memory constraints: Each concurrent task uses memory
- API rate limits: Avoid overwhelming external services
- CPU limits: Prevent system overload
- Resource fairness: Share resources with other processes
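A concurrency cap like the one described above can be sketched with an asyncio.Semaphore; the run_limited helper and max_concurrent parameter are illustrative, not the framework's actual API:

```python
import asyncio

async def run_limited(coros, max_concurrent=2):
    """Run coroutines concurrently, but at most max_concurrent at a time."""
    sem = asyncio.Semaphore(max_concurrent)

    async def guarded(coro):
        async with sem:          # waits while max_concurrent tasks hold the semaphore
            return await coro

    return await asyncio.gather(*(guarded(c) for c in coros))

async def work(i):
    await asyncio.sleep(0.01)    # stand-in for a rate-limited API call
    return i * 2

out = asyncio.run(run_limited([work(i) for i in range(5)], max_concurrent=2))
print(out)  # → [0, 2, 4, 6, 8]
```

gather preserves input order, so results line up with the submitted tasks even though completion order varies.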
Timeout Control
Set a timeout for the entire parallel block.

Per-Branch Timeout
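With plain asyncio, a block-level limit wraps the whole gather call, while a per-branch limit wraps each branch individually; the node names below are hypothetical:

```python
import asyncio

async def slow_node():
    await asyncio.sleep(0.2)
    return "slow"

async def fast_node():
    await asyncio.sleep(0.01)
    return "fast"

async def main():
    # Whole-block timeout: one limit covers every branch together.
    block = await asyncio.wait_for(
        asyncio.gather(fast_node(), slow_node()), timeout=1.0)

    # Per-branch timeout: each branch gets its own limit; a slow branch
    # times out without affecting the others.
    try:
        await asyncio.wait_for(slow_node(), timeout=0.05)
        branch = "completed"
    except asyncio.TimeoutError:
        branch = "timed out"
    return block, branch

block, branch = asyncio.run(main())
print(block, branch)  # → ['fast', 'slow'] timed out
```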
The timeout applies to the entire parallel block, not to individual branches; to bound a single branch, enforce a timeout inside that branch.

Resource Optimization
Parallel nodes automatically optimize resource usage.

Resource Allocation
The system automatically:
1. Analyzes available resources:
   - CPU cores available
   - Available memory (70% utilization target)
   - Estimated memory per node (50 MB default)
2. Calculates the optimal number of workers
3. Creates execution batches that:
   - Balance load across workers
   - Prevent resource exhaustion
   - Maintain system stability
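The allocation rules above (70% memory target, 50 MB per node) suggest a calculation along these lines; the formula is a reconstruction for illustration, not the framework's actual code:

```python
import os

def optimal_workers(branches, available_memory_mb,
                    per_node_mb=50, memory_target=0.7, cpu_count=None):
    """Estimate a worker count bounded by CPU cores and memory headroom."""
    cpu_count = cpu_count or os.cpu_count() or 1
    # Workers the memory budget allows at the target utilization.
    memory_cap = int(available_memory_mb * memory_target // per_node_mb)
    return max(1, min(branches, cpu_count, memory_cap))

# 10 branches, 8 cores, 1 GB free: memory allows 14 workers, CPU caps it at 8.
print(optimal_workers(10, available_memory_mb=1024, cpu_count=8))  # → 8
```

With tight memory the memory cap wins instead, e.g. optimal_workers(100, 256, cpu_count=4) yields 3.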
Manual Resource Configuration
For fine-grained control, configure resources manually.

Error Handling
Individual Branch Failures
If any branch fails, the entire parallel block raises an error.

Graceful Error Handling
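Both behaviors can be sketched with asyncio.gather: by default the first branch exception propagates out of the block, while catching inside a node lets the block complete with a fallback value. The node names are hypothetical:

```python
import asyncio

async def flaky_node():
    raise RuntimeError("upstream service unavailable")

async def steady_node():
    return "ok"

async def flaky_node_graceful():
    # Catch inside the node and degrade to a fallback value.
    try:
        return await flaky_node()
    except RuntimeError:
        return None

async def main():
    # Fail-fast: the first branch error propagates out of the whole block.
    try:
        await asyncio.gather(steady_node(), flaky_node())
        failed = False
    except RuntimeError:
        failed = True

    # Graceful: errors are absorbed per node, so the block completes.
    results = await asyncio.gather(steady_node(), flaky_node_graceful())
    return failed, results

failed, results = asyncio.run(main())
print(failed, results)  # → True ['ok', None]
```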
Handle errors within nodes for graceful degradation.

Workflow Integration
In Sequences
Parallel nodes integrate seamlessly with sequential workflows.

Nested Parallel Execution
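Nesting can be sketched as gather calls inside gather calls; the inner block's results arrive as a sub-list within the outer block's results. The node names are hypothetical:

```python
import asyncio

async def leaf(name):
    await asyncio.sleep(0.01)
    return name

async def inner_block():
    # An inner parallel block: two leaves run concurrently.
    return await asyncio.gather(leaf("a"), leaf("b"))

async def outer_block():
    # The outer block runs the inner block alongside another node.
    return await asyncio.gather(inner_block(), leaf("c"))

result = asyncio.run(outer_block())
print(result)  # → [['a', 'b'], 'c']
```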
Parallel blocks can contain other parallel blocks.

With Decision Nodes
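A decision node that routes on the combined parallel results can be sketched as a plain conditional; score_node and the 0.5 threshold are hypothetical:

```python
import asyncio

async def score_node(value):
    await asyncio.sleep(0.01)   # stand-in for a scoring service call
    return value

async def workflow():
    # Run scoring branches in parallel, then route on the combined result.
    scores = await asyncio.gather(score_node(0.9), score_node(0.4))
    # Decision node: choose the next step based on the aggregated scores.
    if min(scores) >= 0.5:
        return "approve"
    return "manual_review"

decision = asyncio.run(workflow())
print(decision)  # → manual_review
```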
Combine parallel execution with conditional routing.

Performance Considerations
When to Use Parallel Execution
Good candidates:
- Independent operations (no data dependencies)
- I/O-bound tasks (API calls, file operations)
- CPU-bound tasks with sufficient cores
- Long-running operations

Poor candidates:
- Operations with dependencies between them
- Very fast operations (overhead exceeds benefit)
- Memory-intensive operations (without concurrency limits)
- Operations requiring strict ordering
Overhead Analysis
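Overhead can be estimated by timing a sequential run against a parallel one; the sleep durations below are illustrative stand-ins for I/O:

```python
import asyncio
import time

async def io_task():
    await asyncio.sleep(0.05)   # stand-in for an I/O-bound operation

async def sequential():
    for _ in range(4):
        await io_task()

async def parallel_run():
    await asyncio.gather(*(io_task() for _ in range(4)))

start = time.perf_counter()
asyncio.run(sequential())
seq_elapsed = time.perf_counter() - start

start = time.perf_counter()
asyncio.run(parallel_run())
par_elapsed = time.perf_counter() - start

# Sequential ≈ 4 × 0.05 s; parallel ≈ 0.05 s plus scheduling overhead.
print(par_elapsed < seq_elapsed)  # → True
```

For very fast operations the scheduling overhead dominates, which is why they appear under the poor candidates above.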
Resource Impact
| Branches | No Limit | max_concurrent=2 | Optimization |
|---|---|---|---|
| 3 | 3 workers | 2 workers | Auto-calculated |
| 10 | 10 workers | 2 workers | CPU-limited |
| 100 | 100 workers ⚠️ | 2 workers | Memory-limited |
Set max_concurrent or enable optimization when running more than 5 branches.
Best Practices
- Use parallel for independent operations only
- Set appropriate concurrency limits
- Use timeouts for long-running operations
- Handle errors gracefully
- Name nodes clearly for result access

