feat: do not find turn indices if turn is not trainable
Description
We were wasting compute trying to find the turn indices for turns we won't even train on. This removes at least half of the computation done per sample (assuming assistant turns make up about 50% of the turns).
Second, the only case where we actually need the indices of a non-trainable turn is when train_on_eot / train_on_eos is set to ALL, so in that case we still do the full computation.
Introduces a new warning if the last turn is not trainable (which is unusual):
"Last turn is not trainable, skipping having to find the turn indices. "
"This may cause incorrect last EOT/EOS token to be unmasked."
Motivation and Context
How has this been tested?
Ran on a dummy dataset with the benchmark below: https://github.com/axolotl-ai-cloud/axolotl/pull/2696#issuecomment-2893971301
Needs extra eyes to double-check.
Screenshots (if appropriate)
Types of changes
Social Handles (Optional)
Summary by CodeRabbit
- Bug Fixes
- Improved handling of non-trainable chat turns to prevent unnecessary processing and potential masking issues, with warnings logged if skipping may affect the last turn.
Walkthrough
A conditional check was added to the _tokenize_single_prompt method in the ChatTemplateStrategy class. This update skips processing of turns not marked as trainable, except for the last turn when certain training flags are set. A warning is logged if skipping the last turn may affect masking. No public API changes were made.
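For intuition, a turn is typically non-trainable when its role is not among the roles being trained on (often only the assistant role). A purely illustrative example of which turns would skip the index lookup (field names are assumptions, not the exact dataset schema):

```python
# Illustrative only: which turns of a chat sample would be skipped if only
# assistant turns are trainable. Field names are assumptions, not the exact
# axolotl dataset schema.
conversation = [
    {"role": "system", "content": "You are a helpful assistant."},  # skipped
    {"role": "user", "content": "What is the capital of France?"},  # skipped
    {"role": "assistant", "content": "Paris."},                     # trainable
    {"role": "user", "content": "And of Germany?"},                 # skipped
    {"role": "assistant", "content": "Berlin."},                    # trainable
]

trainable_roles = {"assistant"}
for turn in conversation:
    trainable = turn["role"] in trainable_roles
    print(f"{turn['role']:>9}: {'train' if trainable else 'skip index lookup'}")
```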
Changes
| File | Change Summary |
|---|---|
| src/axolotl/prompt_strategies/chat_template.py | Added logic to skip non-trainable turns in _tokenize_single_prompt, with a warning for last turn masking issues. |
Sequence Diagram(s)
sequenceDiagram
    participant User
    participant ChatTemplateStrategy
    loop For each turn in prompt
        ChatTemplateStrategy->ChatTemplateStrategy: Check should_train
        alt should_train is False and not last turn (or last turn without special flags)
            ChatTemplateStrategy-->>ChatTemplateStrategy: Skip turn (continue)
        else should_train is False and last turn with special flags
            ChatTemplateStrategy->ChatTemplateStrategy: Log warning
            ChatTemplateStrategy-->>ChatTemplateStrategy: Skip turn (continue)
        else
            ChatTemplateStrategy->ChatTemplateStrategy: find_turn and set labels
        end
    end
Poem
In the garden of code where prompts are spun,
Some turns now rest, their training done.
Skipping and hopping, we log with care,
So the last token’s mask won’t snare.
With every check, our logic’s tight—
The rabbit’s code now hops just right! 🐇
📜 Recent review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
Cache: Disabled due to data retention organization setting
Knowledge Base: Disabled due to data retention organization setting
📥 Commits
Reviewing files that changed from the base of the PR and between dce9fa8cbfa1587f1af400254e1d3ec8458baef6 and 9bc6cc4a26f733e3073b960cb60ad6723e4a4be9.
📒 Files selected for processing (1)
- src/axolotl/prompt_strategies/chat_template.py (1 hunks)
🚧 Files skipped from review as they are similar to previous changes (1)
- src/axolotl/prompt_strategies/chat_template.py
⏰ Context from checks skipped due to timeout of 90000ms (8)
- GitHub Check: PyTest from Source Dist (3.11, 2.7.0)
- GitHub Check: PyTest (3.11, 2.7.0)
- GitHub Check: PyTest from Source Dist (3.11, 2.6.0)
- GitHub Check: PyTest (3.11, 2.6.0)
- GitHub Check: pre-commit
- GitHub Check: PyTest from Source Dist (3.11, 2.5.1)
- GitHub Check: PyTest (3.11, 2.5.1)
- GitHub Check: pre-commit
Codecov Report
All modified and coverable lines are covered by tests ✅
📢 Thoughts on this report? Let us know!
On a 100k dummy tool dataset with 6 turns of short content per sample, the time taken to tokenize went from a three-run average of 83.3s to 47.6s (a decrease of about 42.8%). This change will be even more impactful for longer multi-turn conversations.
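For anyone wanting to run a similar comparison, here is a minimal timing sketch (assuming a `tokenize_dataset` callable and a pre-built list of dummy chat samples; this is not the benchmark script from the linked comment):

```python
# Minimal timing sketch -- assumes a `tokenize_dataset(samples)` callable and a
# list of dummy chat samples; this is NOT the benchmark script from the PR.
import time


def benchmark(tokenize_dataset, samples, runs=3):
    """Average wall-clock time over a few runs, mirroring a three-run average."""
    timings = []
    for _ in range(runs):
        start = time.perf_counter()
        tokenize_dataset(samples)
        timings.append(time.perf_counter() - start)
    return sum(timings) / len(timings)


# Example usage with hypothetical objects:
# avg_seconds = benchmark(strategy.tokenize_prompt, dummy_samples)
# print(f"average over 3 runs: {avg_seconds:.1f}s")
```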