Refactor Prompt Generation and Context Handling
Summary
This Pull Request (PR) introduces significant improvements to how the SheLLM project handles context and prompt generation. The primary focus is solving a Known Issue from the project's README:
"Context should take the last output with higher priority and not the previous commands."
Problem
Previously, the system passed the entire terminal session history as a single object to the LLM, without distinguishing between different elements or prioritizing recent output. This approach made it impossible to emphasize the most contextually relevant information—namely, the last command and its output.
Additionally, only the outputs of commands were included in the session history, leaving out the commands themselves. This omission severely hindered context awareness, as the LLM had no way of knowing what user input produced the outputs it was processing.
Changes
- **Improved Context Prioritization:**
  - Introduced a mechanism to separate the last command and last output from the rest of the context.
  - The system now dynamically builds each query, placing higher emphasis on the last command and its output. This ensures that the most recent context is treated as the most relevant.
- **Complete Session History:**
  - Revised the history handling to include both commands and their corresponding outputs. This provides a comprehensive view of the session, enabling the LLM to understand the sequence of interactions.
- **Dynamic Query Construction:**
  - Queries are now constructed dynamically to reflect both the emphasis on the last interaction and the inclusion of the full command-output history.
  - This approach improves contextual relevance and enhances the LLM’s ability to provide accurate and context-aware responses.
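The three changes above can be illustrated with a minimal sketch. The names here (`SessionContext`, `record`, `build_prompt`) are hypothetical and chosen for illustration, not taken from the SheLLM codebase; the point is the technique: store command-output pairs rather than outputs alone, then build each query with the earlier history included and the latest interaction explicitly flagged as highest priority.

```python
from dataclasses import dataclass, field


@dataclass
class SessionContext:
    """Rolling history of (command, output) pairs for the terminal session."""
    history: list[tuple[str, str]] = field(default_factory=list)

    def record(self, command: str, output: str) -> None:
        """Store both the command and its output, not just the output."""
        self.history.append((command, output))

    def build_prompt(self, question: str) -> str:
        """Construct a query that emphasizes the most recent interaction."""
        if not self.history:
            return question
        *earlier, (last_cmd, last_out) = self.history
        parts: list[str] = []
        if earlier:
            lines = "\n".join(f"$ {cmd}\n{out}" for cmd, out in earlier)
            parts.append(f"Earlier session history:\n{lines}")
        # The last command-output pair is separated out and labeled so the
        # LLM treats it as the most relevant piece of context.
        parts.append(
            "Most recent command and output (treat as highest priority):\n"
            f"$ {last_cmd}\n{last_out}"
        )
        parts.append(question)
        return "\n\n".join(parts)


ctx = SessionContext()
ctx.record("ls", "main.py  utils.py")
ctx.record("cat main.py", "print('hello')")
prompt = ctx.build_prompt("What does this script do?")
```

In this sketch, the earlier history is still present (so the LLM can see the full sequence of interactions), but the last interaction is structurally separated and labeled rather than buried in a single undifferentiated blob.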
Notes
- These changes address the contextual relevance issue while also improving the overall quality of the session history passed to the LLM.
- The implementation has not been tested with the `groq` model, but no compatibility issues are anticipated.
- Type hints and validation have been added in several places.
This PR enhances the contextual handling mechanism in SheLLM, making it more dynamic and contextually aware. Feedback and additional suggestions are welcome!
https://github.com/astral-sh/ruff
^ check out this awesome tool
Ruff is life ❣️