feat: Let LLM Decide Number of Tasks After Reasoning About Task/PRD (no preset number of tasks)
Motivation
Manual task breakdowns often miss nuance or are arbitrarily limited, resulting in inadequate planning or missed steps. Allowing the LLM to determine the number and scope of tasks ensures more intelligent, context-aware decomposition, improving coverage and efficiency.
Proposed Solution
- Upgrade task generation so the LLM reads the PRD and autonomously decides how many tasks are needed, based on reasoning about the requirements.
- Use the OpenAI (or another LLM) API, replacing the previous fixed-count generation logic.
- Integrate with the existing task creation flow, outputting tasks as Markdown files with YAML metadata.
High-Level Workflow
- Refactor prompt sent to LLM to instruct it to analyze the PRD and generate an appropriate set of tasks.
- Update parsing functions to handle variable-length task lists from the LLM's output.
- Validate and test output with a variety of PRDs.
- Update documentation to reflect the new smart task generation process.
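The parsing update above could be sketched as follows. This is a minimal illustration, assuming the LLM is prompted to reply with a JSON array of task objects; the function name and object shape are hypothetical, not the project's actual API:

```javascript
// Hypothetical parser for a variable-length task list returned by the LLM.
// Assumes the model replies with a JSON array of { title, description }
// objects; all names here are illustrative.
function parseTaskList(llmOutput) {
  const tasks = JSON.parse(llmOutput);
  if (!Array.isArray(tasks)) {
    throw new Error("Expected a JSON array of tasks");
  }
  // No fixed count: accept however many tasks the model produced,
  // but reject obviously degenerate output.
  if (tasks.length === 0) {
    throw new Error("LLM returned an empty task list");
  }
  // Assign sequential ids; default missing descriptions to an empty string.
  return tasks.map((t, i) => ({
    id: i + 1,
    title: t.title,
    description: t.description ?? "",
  }));
}
```

The key change from fixed-count logic is that the array length is taken as-is from the model's output rather than validated against a preset number.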
Key Elements
- LLM prompt redesign for PRD-driven task count
- Parsing enhancements for flexible task output
- Integration with Markdown/YAML output pipeline
- Improved PRD coverage in task breakdown
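For illustration, a generated task file in the Markdown/YAML pipeline might look like this. The field names are assumptions for the sketch, not the project's actual schema:

```markdown
---
id: 3
title: Implement password hashing
status: pending
priority: high
dependencies: [1, 2]
---

Hash user passwords with bcrypt before persisting them, and verify
hashes on login instead of comparing plaintext.
```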
Example Workflow
$ task-master generate-tasks --prd prds/authentication.md
→ Generated 7 tasks for project: Authentication
Implementation Considerations
- LLM API token and usage limits
- Output validation to prevent missing/overlapping tasks
- Backward compatibility: ensure legacy PRDs can be used
- Potential for variable LLM costs with larger PRDs
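The output-validation concern above could be addressed with a simple pass over the generated list before any files are written. A sketch, with hypothetical names and a deliberately minimal set of checks (duplicate titles and gaps in the id sequence):

```javascript
// Hypothetical validation pass over LLM-generated tasks (names illustrative):
// flags duplicate titles and id-sequence gaps, returning a list of problems.
function validateTasks(tasks) {
  const problems = [];
  const seenTitles = new Set();
  tasks.forEach((t, i) => {
    // Normalize titles so "Setup" and " setup " count as overlapping tasks.
    const key = t.title.trim().toLowerCase();
    if (seenTitles.has(key)) problems.push(`duplicate task: "${t.title}"`);
    seenTitles.add(key);
    // Ids are expected to be sequential starting at 1.
    if (t.id !== i + 1) problems.push(`id gap or reorder at position ${i + 1}`);
  });
  return problems;
}
```

An empty return value means the list passes; anything else could abort generation or trigger a retry with the LLM.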
Out of Scope (Future Considerations)
- UI for manual task editing post-generation
- Real-time LLM feedback or iterative refinement
Good idea. I've already got this in there in a way:
Whatever number you give it, I gave it the freedom to completely override the number if logic calls for it.
I tried to avoid having an in-between LLM call just for this, but I think we can probably add it as a flag.
Something like --num-tasks=auto
and that would make it ask the AI to figure out the right number of tasks, then it would parse the PRD based on that number
Alternatively, it might make sense to create support for analyze-complexity for a given document, which would produce the type of recommendations that could be ported to parse-prd (including the number of tasks to parse into based on the complexity/detail)
I've got a related task somewhere. Gonna have to find it.
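The `--num-tasks=auto` idea could be sketched like this. The flag handling and the preliminary LLM call are assumptions about one possible implementation, not existing code:

```javascript
// Hypothetical --num-tasks resolution: a concrete number is passed through,
// while "auto" defers the count to a preliminary LLM call. The askLlm
// callback stands in for whatever client the project actually uses.
async function resolveTaskCount(numTasksFlag, prdText, askLlm) {
  if (numTasksFlag !== "auto") {
    const n = Number.parseInt(numTasksFlag, 10);
    if (Number.isNaN(n) || n <= 0) {
      throw new Error(`invalid --num-tasks value: ${numTasksFlag}`);
    }
    return n;
  }
  // Extra round trip: ask the model to estimate an appropriate count,
  // then parse-prd would break the PRD into that many tasks.
  const reply = await askLlm(
    `How many implementation tasks does this PRD need? ` +
      `Reply with a single integer.\n\n${prdText}`
  );
  return Number.parseInt(reply, 10);
}
```

This keeps the default path (explicit number) free of the extra LLM call, which is the cost concern raised above.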
Great that you will consider implementing it!
> Whatever number you give it, I gave it the freedom to completely override the number if logic calls for it.
oh is that in the system prompt?
> Something like --num-tasks=auto
indeed. It means the LLM could return any number of tasks in the JSON object, so the parser should be updated accordingly (right?)
> Alternatively, it might make sense to create support for analyze-complexity for a given document, which would produce the type of recommendations that could be ported to parse-prd (including the number of tasks to parse into based on the complexity/detail)
yes, it could be an alternative, but it seems it would make the UX flow more complex?
@eyaltoledano @orakemu I think we can close this one, as it seems to be already implemented: https://github.com/eyaltoledano/claude-task-master/commit/5eafc5ea112c91326bb8abda7a78d7c2a4fa16a1