remix-project
Feedback and suggestions for AI in Remix
Feedback
Here are all the AI-related features I've found so far in Remix:
1. When the compiler throws an error, there is an 'ask gpt' button in the error message. Clicking it explains the cause of the error and suggests a solution.
2. Right-click a function in the editor and there is an option, 'Explain function'. Clicking it explains the function.
3. Right-click a function in the editor and there is an option, 'Generate documentation'. Clicking it generates comment documentation for that function.
4. Type `gpt` in the terminal to customize the prompt.
5. Write a comment starting with `///` in the editor, like `/// Write a function that adds 2 uint256 numbers and returns the result`, and it will write a function based on the comment.
6. It predicts what you're going to write next, based on the content from the first line of the contract file to the cursor position.
Features 1-4 are implemented on top of `openaigpt`, a built-in Remix plugin that calls https://openai-gpt.remixproject.org/ (a backend service set up by the Remix team, configured with an OpenAI key).
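As a rough illustration of how a plugin like this could talk to such a backend, here is a minimal TypeScript sketch. The payload shape (`prompt`) and the function names are my assumptions, not taken from the actual `openaigpt` plugin source:

```typescript
// Hypothetical sketch of how features 1-4 might reach the backend.
// The request shape is an assumption; only the URL comes from the thread.
const GPT_BACKEND = 'https://openai-gpt.remixproject.org/';

interface GptRequest {
  url: string;
  body: { prompt: string };
}

// Build the request that a feature like 'ask gpt' might send.
function buildGptRequest(prompt: string): GptRequest {
  return { url: GPT_BACKEND, body: { prompt } };
}

// Sending it would then be a plain POST (not executed here).
async function askGpt(prompt: string): Promise<string> {
  const { url, body } = buildGptRequest(prompt);
  const res = await fetch(url, {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify(body),
  });
  return res.text();
}
```

The point is that the plugin never needs an OpenAI key client-side; the key lives behind the Remix-hosted service.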
`openaigpt` has nothing to do with `Solidity copilot - Alpha`. The plugin that corresponds to `Solidity copilot - Alpha` is `copilot-suggestion`, which can be found in the plugin list. (I wonder why it's not hidden like `openaigpt`, and why users get to activate and deactivate it. Just wondering, not important.)
There is a checkbox named `Load & Activate copilot` in `Solidity copilot - Alpha`. Only after checking it are 5 and 6 available. But there's a difference between 5 and 6: `Maximum number of words to generate` and `Temperature` only apply to 6, because only 6 is implemented on top of `copilot-suggestion`, while 5 works by calling https://gpt-chat.remixproject.org/infer.
My question is: why is 5, which is not implemented on top of `copilot-suggestion`, gated by `Load & Activate copilot`?
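To make the observed behaviour concrete, here is a minimal sketch of the gating as described above, assuming a single shared flag behind the checkbox. All names are hypothetical, not from the Remix source:

```typescript
// Sketch of the gating behaviour described in the thread.
// Names are illustrative assumptions, not the real Remix settings object.
class CopilotSettings {
  copilotActivated = false; // the 'Load & Activate copilot' checkbox
  maxNewWords = 10;         // applies only to feature 6
  temperature = 0.5;        // applies only to feature 6
}

// Feature 5: '///' comment completion via the remote /infer endpoint.
function canUseCommentCompletion(s: CopilotSettings): boolean {
  // Gated by the checkbox even though it does not use copilot-suggestion.
  return s.copilotActivated;
}

// Feature 6: next-token prediction via copilot-suggestion.
function canUseInlinePrediction(s: CopilotSettings): boolean {
  return s.copilotActivated;
}
```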
Another question: is https://gpt-chat.remixproject.org/infer served by a model trained by the Remix team? I don't think it's based on OpenAI, because it is noticeably weaker. For example, when I asked it `/// Write a function that adds 4 uint256 numbers and returns the result`, it returned:
```solidity
function add4Uint256(uint256 a, uint256 b) internal pure returns (uint256) {
    return a + b;
}
```
The result I got from https://openai-gpt.remixproject.org/ was correct:
```solidity
function addNumbers(uint256 num1, uint256 num2, uint256 num3, uint256 num4) public pure returns (uint256) {
    uint256 sum = num1 + num2 + num3 + num4;
    return sum;
}
```
One more question: I read the code of `copilot-suggestion`, which is implemented with `@xenova/transformers`. It runs a tiny AI model in the browser, so it can provide AI capabilities without any backend service, which is really cool. I know two more methods are coming in `copilot-suggestion`, `useRemoteService` and `discardRemoteService`, which allow users to configure a custom remote service. Is that remote service also based on `@xenova/transformers`? Or is it based on OpenAI?
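For readers unfamiliar with these two methods, here is a hedged sketch of how a `useRemoteService` / `discardRemoteService` pair could switch between in-browser and remote inference. The method names come from the discussion above; everything else is an assumption:

```typescript
// Sketch of switching between a local (in-browser) and a remote inference
// backend. The class and InferFn shape are illustrative, not Remix's API.
type InferFn = (input: string) => string;

class SuggestionService {
  private remote: InferFn | null = null;

  constructor(private local: InferFn) {}

  // Point completions at a user-supplied remote endpoint.
  useRemoteService(remote: InferFn): void {
    this.remote = remote;
  }

  // Fall back to the in-browser (@xenova/transformers-style) model.
  discardRemoteService(): void {
    this.remote = null;
  }

  infer(input: string): string {
    return (this.remote ?? this.local)(input);
  }
}
```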
Suggestions
- Make 5 unaffected by `Load & Activate copilot`, and get its results from `openaigpt`.
- Add a note to `Solidity copilot - Alpha` specifying that the configuration only applies to 6, to avoid misleading users.
- Add an OpenAI configuration section in the settings, with the following supported items: apiKey, baseUrl, temperature, max_tokens, model. Users without an apiKey can apply for a free one from this project: https://github.com/chatanywhere/GPT_API_free. The free apiKey supports gpt-3.5-turbo and gpt-4; however, due to the high price of gpt-4, it is limited to 10 calls every 24 hours.
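The proposed settings entry could look something like the sketch below. The field names follow the list above; the defaults are illustrative only, not a proposal for actual default values:

```typescript
// Sketch of the suggested OpenAI settings section.
// Field names come from the suggestion above; defaults are placeholders.
interface OpenAISettings {
  apiKey: string;
  baseUrl: string;     // lets users point at OpenAI-compatible proxies
  temperature: number;
  max_tokens: number;
  model: string;
}

const defaults: OpenAISettings = {
  apiKey: '',
  baseUrl: 'https://api.openai.com/v1',
  temperature: 0.7,
  max_tokens: 256,
  model: 'gpt-3.5-turbo',
};
```

A configurable `baseUrl` is what makes third-party key providers such as the GPT_API_free project usable at all.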
If you like my suggestions, I can help you implement them. @ryestew @yann300
@drafish thx for the constructive feedback.
> openaigpt have nothing to do with Solidity copilot - Alpha. The plugin that corresponds to Solidity copilot - Alpha is copilot-suggestion, which can be found in the plugin list. (I wonder why it's not hidden like openaigpt, why let users can choose to active and deactivate it. Just wonder, not important).
That is right. The choice to activate and deactivate it matters for the UI; some users might not want any suggestions at all.
> I have a question here, why does 5, which is not implemented based on copilot-suggestion, have to be affected by the Load & Activate copilot?
This is more related to the UI experience. Having it enabled all the time could lead to overflowing the server with requests that the user does not even want. Furthermore, it uses the same parameters `Maximum number of words to generate` and `Temperature` for inference, even though the user hasn't activated 6.
> Another question is, is https://gpt-chat.remixproject.org/infer a model trained by the Remix team? I think it's not based on openai, because it is a bit dumber than OpenAI. For example, when I asked it to `///Write a function that adds 4 uint256 numbers and returns the result`, it returned [...]
Exactly, that model is not based on OpenAI; it is trained internally.
> Another question here, I read the code of copilot-suggestion, which is implemented based on @xenova/transformers which runs a tiny AI model in the browser, and it can use AI capabilities without any backend service, which is really cool. I know there will be two more methods in copilot-suggestion -- useRemoteService, discardRemoteService, which allow users to customize the remote service. Is this remote service also based on @xenova/transformers? Or is it based on openai?
The remote service is based on a much larger, higher-performance model. The in-browser solution provided by `@xenova/transformers` only works for small model sizes. Note that a user has to download the model whenever it is not cached on the device, which is cumbersome.
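For context, the in-browser flow with `@xenova/transformers` goes through its `pipeline()` factory, which downloads the model weights on first use and caches them; a cold cache therefore means a full re-download. The commented lines below show the real library calls (with a placeholder model id, since the thread does not name one); the small helper only models the cache decision:

```typescript
// Real transformers.js usage would look roughly like this (not executed here):
//
//   import { pipeline } from '@xenova/transformers';
//   const generate = await pipeline('text-generation', '<some small model>');
//   const out = await generate(code, { max_new_tokens: 30, temperature: 0.5 });
//
// The helper below captures only the caching point made above:
function needsDownload(cachedModels: Set<string>, model: string): boolean {
  // Cold cache (e.g. cleared browser storage) forces a full re-download.
  return !cachedModels.has(model);
}
```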
> This is more related to the UI experience. Having it enabled all the time could lead to overflowing the server with requests, that user do not event desire.
Oh, I get it: 5 could easily be triggered by accident. I think you could put it in the right-click menu like 2 and 3; that would avoid the accidental triggering.
> Furthermore it uses the same parameters Maximum number of words to generate and Temperature for inference, even though the user hasn't activated 6
No, it does not. I checked the code; these two parameters are not used in 5. See the code here: https://github.com/ethereum/remix-project/blob/master/libs/remix-ui/editor/src/lib/providers/inlineCompletionProvider.ts#L45

```typescript
const {data} = await axios.post('https://gpt-chat.remixproject.org/infer', {comment: ask.replace('///', '')})
```
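Restated as a small testable sketch: the linked line strips the `///` marker and posts just the bare comment to `/infer`. Only the payload-building step is extracted here (the function name is mine, not from the Remix source); note that neither `Maximum number of words to generate` nor `Temperature` appears anywhere in the payload, which is the point being made:

```typescript
// The payload built by the linked inlineCompletionProvider line,
// extracted into a helper for illustration. 'toInferPayload' is a
// hypothetical name; the replace('///', '') logic is from the real code.
function toInferPayload(ask: string): { comment: string } {
  return { comment: ask.replace('///', '') };
}

// The real code then does, roughly:
// const { data } = await axios.post('https://gpt-chat.remixproject.org/infer', toInferPayload(ask));
```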
@STetsing