
feat: Change parameter settings explainer

imtuyethan opened this issue 1 month ago · 0 comments

Problem

The current explainers for parameter settings are not clear enough, according to user feedback (screenshot: `Screenshot 2024-05-14 at 11 44 54 PM`).

We need to guide users on how to set these parameters effectively, directly in the app rather than only through the docs.

Success Criteria

Explainers should be short, precise, and straight to the point.

Parameter settings explainer

Inference Parameters

| Parameter | Description |
|---|---|
| Temperature | Influences the randomness of the model's output. A higher value leads to more random and diverse responses, while a lower value produces more predictable outputs. |
| Top P | Sets the probability threshold for nucleus sampling. A lower value (e.g., 0.9) may be more suitable for focused, task-oriented applications, while a higher value (e.g., 0.95 or 0.97) may be better for open-ended, creative tasks. |
| Stream | Enables real-time streaming of output, which is useful for applications needing immediate responses, like live interactions. Tokens are delivered as they are generated instead of all at once. |
| Max Tokens | Sets the upper limit on the number of tokens the model can generate in a single output. A higher limit benefits detailed and complex responses, while a lower limit helps maintain conciseness. |
| Stop Sequences | Defines specific tokens or phrases that signal the model to stop producing further output, allowing you to control the length and coherence of the output. |
| Frequency Penalty | Penalizes tokens in proportion to how often they have already appeared, reducing repetition of the same words or phrases within a single output. Increasing it is useful when you want more varied language, like creative writing or content generation. |
| Presence Penalty | Penalizes tokens that have appeared at all, promoting novelty in the output. Use a higher value for tasks requiring diverse ideas. |
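As a companion to the in-app explainers, the table above can be illustrated with a short sketch of how these inference parameters map onto an OpenAI-compatible chat-completions payload (Jan exposes a local OpenAI-compatible server; the model id and parameter values below are illustrative placeholders, not confirmed defaults):

```python
# Sketch: assembling a chat-completions payload from Jan's inference
# parameters. Values and the model id are hypothetical examples.

def build_request(prompt: str) -> dict:
    """Assemble a request body using the parameters described above."""
    return {
        "model": "local-model",            # placeholder model id
        "messages": [{"role": "user", "content": prompt}],
        "temperature": 0.7,                # randomness of output
        "top_p": 0.95,                     # nucleus-sampling threshold
        "stream": True,                    # deliver tokens as generated
        "max_tokens": 512,                 # upper bound on generated tokens
        "stop": ["###"],                   # example stop sequence
        "frequency_penalty": 0.2,          # discourage repeated words
        "presence_penalty": 0.1,           # encourage novel tokens
    }

payload = build_request("Explain temperature in one sentence.")
print(sorted(payload.keys()))
```

Each key corresponds to one row of the table, which may help users connect the explainer text to what the app actually sends.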

Model Parameters

| Parameter | Description |
|---|---|
| Prompt Template | A predefined text or framework that guides the AI model's response generation. It includes placeholders or instructions for the model to fill in or expand upon. |
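A minimal sketch of what "placeholders" means in practice (the template text is illustrative, not Jan's actual default for any model):

```python
# Sketch: a prompt template with placeholders that the app fills in
# before sending the final prompt to the model.

PROMPT_TEMPLATE = (
    "### Instruction:\n{system_prompt}\n\n"
    "### Input:\n{user_input}\n\n"
    "### Response:\n"
)

def render(system_prompt: str, user_input: str) -> str:
    """Fill the template's placeholders to produce the final prompt."""
    return PROMPT_TEMPLATE.format(
        system_prompt=system_prompt,
        user_input=user_input,
    )

print(render("You are a helpful assistant.", "What is Top P?"))
```

The explainer could note that different model families expect different templates, so changing it can noticeably affect output quality.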

Engine Parameters

| Parameter | Description |
|---|---|
| Context Length | Sets the maximum input the model can use to generate a response; it varies with the model used. A higher length is better for tasks needing extensive context, like summarizing long documents. A lower length can improve response time and reduce computing needs for simple queries. |

imtuyethan · May 14 '24 17:05