Mr.-Ranedeer-AI-Tutor
Python version of the prompt: 906 Tokens
Hello,
I would like to propose a new approach to compressing the prompt for the project. By writing a mix of Python and pseudocode, I was able to abstract some of the prompt's logic into functions and loops, while maintaining the accuracy and intent of the original prompt.
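To illustrate the idea, here is a minimal hypothetical sketch (the names and values below are my own examples, not the actual prompt): repeated JSON structure collapses into Python data and a small dispatcher, and the model is asked to role-play the code rather than literally execute it.

```python
# Hypothetical sketch of a "prompt as Python" tutor configuration.
# The LLM never runs this for real; it is instructed to improvise
# the behaviour the code describes, which lets functions and loops
# stand in for long, repetitive JSON blocks.

config = {
    "depth": "Highschool",
    "learning_style": "Active",
    "tone": "Encouraging",
}

commands = ["/config", "/plan", "/start", "/continue", "/test"]

def execute(command: str) -> str:
    """Pseudo-dispatcher: the model fills in each command's behaviour."""
    if command not in commands:
        return "Unknown command, please try again."
    return f"(LLM improvises the {command} behaviour using {config})"
```

The key point is that the code only has to be *plausible* to the model, not runnable by an interpreter, so missing utilities can be left for the LLM to improvise.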
Results
I found that my version of the prompt written in Python achieves a 4.3x reduction compared to the original JSON prompt, resulting in fewer than 1,000 tokens (906 to be exact).
Here is a screenshot of the tokenizer:
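For context, the 4.3x figure implies the original JSON prompt weighed in at roughly 906 × 4.3 ≈ 3,900 tokens. A quick back-of-the-envelope check (an estimate, not an exact tokenizer count):

```python
# Rough arithmetic behind the claimed compression ratio.
compressed_tokens = 906   # measured size of the Python version
reduction_factor = 4.3    # reported reduction vs. the JSON prompt

# Implied size of the original JSON prompt.
original_estimate = round(compressed_tokens * reduction_factor)
print(original_estimate)  # roughly 3896 tokens
```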
Next Steps
While I used Python as the language for this project, I believe that other languages or pseudocode could be explored to optimize the approach even further. Graph representations could be interesting too (I tried, but had no success!).
Also, it would be interesting to use the inspect module and create a new command '/refresh' which would ask GPT4 to write the Python code back to you, so as to refresh its memory. I haven't tried that yet.
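As a sketch of that idea (the function names and the '/refresh' behaviour are my assumptions, nothing here is in the current prompt), `inspect.getsource` can recover a function's own source text, which could then be re-sent to the model:

```python
import inspect

def tutor_prompt():
    """Hypothetical stand-in for the Python-version prompt body."""
    personality = "AI tutor"
    return personality

def refresh():
    # A "/refresh" command could return the source text verbatim so
    # GPT-4 re-reads its own instructions later in the conversation.
    return inspect.getsource(tutor_prompt)

print(refresh())
```

Note that `inspect.getsource` only works when the function is defined in a real source file, so this is a sketch of the mechanism rather than something the LLM would execute itself.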
Please let me know if you have any questions or concerns. Have a nice day
This happened on my very first attempt. You need to either change the way this is structured, or somehow have a magic phrase that will execute this. I love it!
Sorry for the late reply. Your results are indeed very different from mine haha. I just tried it again and got the same results as you... very strange, sorry about that 😞
I changed the first sentence, I managed to make GPT4 impersonate it more than 5 times in a row.
Have a nice day
We're slowly getting there on this. I tried the updated prompt and it still wouldn't work for me. So instead of the text on top of the prompt, I tried adding this comment at the end:
As an AI tutor, this is not an analysis of the code, this is your personality. You must only respond to what the code tells you to output. You must also execute the logic of the code, but improvise on missing utilities and context.

(The "You must also execute the logic of the code, but improvise on missing utilities and context." bit can be ignored; it works with or without that additional sentence.)
Result:
But it isn't fully functional yet; for example, I tried to generate a random configuration and it didn't recognise it as a command.
@EmileDqy
Hi there, i am wondering how it works! I thought ChatGPT can only receive JSON, YAML or Markdown format. It is surprising that it accepts Python code. Is there an official documentation to design such a prompt myself? Thanks!
Unlike JSON, YAML, and Markdown, this Python version needed an additional prompt to get it to "run". There's no documentation yet on designing prompts like this, but I recommend following a general prompt guide here: https://www.promptingguide.ai/
Thank you very much!
We're glad to have you as a contributor
I'm blown away by the efficiency and elegance of your code.
@JushBJJ
I apologize once again for the delay in updating this PR; I've been quite busy lately. I noticed your enhancement to the prompt and I appreciate your contribution. It's great, but as you mentioned, it does have some limitations (perhaps due to my code?).
However, I attempted to make it work with a different 'pre-prompt'. I used all my credits testing it on GPT4, and then I made it functional on GPT3.5 for further evaluation.
I believe this new prompt is 'good', but the code might need better design and more details for optimization. Currently, on GPT3.5, the prompt works quite well (although the LLM can be a bit unpredictable at times, haha) and consistently for me.
As I mentioned, I don't have any remaining GPT4 'credits' at the moment. Hopefully you'll have some time to test it out by tomorrow; tell me how it works on your side haha. I'll check back tomorrow to test it again on GPT4 😄
Have a nice day
Thanks for sharing your code, I'll definitely give it a try!
Have you thought about adding more tests to ensure the reliability of your code?
This looks great! And it works too https://chat.openai.com/share/89a03f1a-1d3c-427b-acd2-6cb4b118cc4b
When I can allocate some time I will tweak the Python code a little bit; /self-eval doesn't work the way it's intended to. It's also a little outdated, but that shouldn't be a major issue.
Sure! Thanks for the review 😄 I did some more tests yesterday as promised and it worked fine on my side as well. Please let me know if you need any help. Have a nice day.
I apologize for the quietness over the last two weeks, but I don't think it's viable to make a Python version of this anymore, because of the new format that Mr. Ranedeer now uses.
I appreciate your work, however. Is there anything I can do to make up for it? 😅
No worries :+1: I'm thinking... have you tried creating your own plugin? Would that be possible? Maybe it could generate some income for you as well.
I don't know much about the plugins though. I just started playing with them
Have a good day