Copilot Chat Sample Application: Lower prompts.json verbosity
The prompt components in prompts.json (for the Copilot Chat Sample Application) are very verbose. I think it would be valuable to reduce their token length to free up more tokens for prompt content and/or faster processing.
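To make the cost concrete, here's a minimal sketch of how one might audit the approximate token footprint of each prompt component. It assumes a flat name-to-string layout for illustration (the real prompts.json may be structured differently), and it uses a rough characters-per-token heuristic rather than a real tokenizer such as tiktoken, which a proper audit would use.

```python
import json

def approx_tokens(text: str) -> int:
    # Rough heuristic: ~4 characters per token for English prose.
    # A real audit should use the model's actual tokenizer.
    return max(1, len(text) // 4)

# Hypothetical prompts.json-style content; the actual file's keys
# and nesting may differ.
prompts_json = """
{
    "systemDescription": "This is a chat between a user and an AI assistant...",
    "systemResponse": "Either respond to the last message or return silence..."
}
"""

prompts = json.loads(prompts_json)
report = {name: approx_tokens(text) for name, text in prompts.items()}
for name, count in sorted(report.items(), key=lambda kv: -kv[1]):
    print(f"{name}: ~{count} tokens")
```

Running something like this against the sample's actual prompts would quantify how many tokens the templates consume before any user content is added.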
An alternative (or additional) possibility would be to highlight in the sample readme that these prompts are only examples, and that writing good prompt components is an important part of building on this sample, since it can significantly impact the AI's performance (perhaps with a reference to the documentation on LLM AI Prompts).
The sample is a really great entry point for quickly getting hands-on experience with semantic-kernel, but I think it's too easy to miss the importance of these prompt templates when they're hidden away in a JSON file.
I'm also slightly skeptical that this level of verbosity is really necessary to achieve decent results. I know this is a work in progress, and the prompts are likely biased toward safety over other concerns, but perhaps that trade-off could also be called out in the sample readme.
@PederHP , thanks for the feedback here. We are looking into it.
Thanks for the feedback on Chat Copilot. As a reference app, its goal was to show what was possible, so it purposefully used as many tokens as possible. Unfortunately, we do not currently have plans to change the verbosity of the current prompts.