r2ai Windows version doesn't connect to the AI API
Hi, I tried the Windows version 1.1.2 and it seems there is some kind of network issue when connecting to the API model. I tried Mistral with model codestral-latest, Anthropic with claude-3-sonnet-20240229, and a local Ollama on my PC, all with the same result.
Using Mistral:

```
[0x00401180]> r2ai -a analizza questa funzione
[r2cmd]>
[0x00401180]> aaaa aaaa
ERROR: Cannot connect to api.mistral.ai:443
INFO: Retrying request (1/10) after failure...
ERROR: Cannot connect to api.mistral.ai:443
INFO: Retrying request (2/10) after failure...
ERROR: Cannot connect to api.mistral.ai:443
INFO: Retrying request (3/10) after failure...
ERROR: Cannot connect to api.mistral.ai:443
INFO: Retrying request (4/10) after failure...
```

**Same error using Ollama:**

```
[0x00401180]> r2ai -e api=ollama
[0x00401180]> r2ai -e host=http://localhost/
ERROR: variable 'r2ai.host' not found
[0x00401180]> r2ai -e port=11434
ERROR: variable 'r2ai.port' not found
[0x00401180]> r2ai -a analizza questa funzione
[r2cmd]>
[0x00401180]> aaaa aaaa
ERROR: Cannot connect to localhost:11434
INFO: Retrying request (1/10) after failure...
ERROR: Cannot connect to localhost:11434
INFO: Retrying request (2/10) after failure...
ERROR: Cannot connect to localhost:11434
INFO: Retrying request (3/10) after failure...
ERROR: Cannot connect to localhost:11434
INFO: Retrying request (4/10) after failure...
ERROR: Cannot connect to localhost:11434
INFO: Retrying request (5/10) after failure...
```

Using Anthropic, too:

```
[0x00401180]> r2ai -e api=anthropic
[0x00401180]> r2ai -e model=claude-3-7-sonnet-20250219
[0x00401180]> r2ai -e prompt=Spiega la funzione corrente in modo dettagliato, evidenzia vulnerabilità e suggerisci refactoring.
[0x00401180]> r2ai -e cmds=pdc
[0x00401180]> r2ai -a analizza questa funzione
[r2cmd]>
[0x00401180]> aaaa aaaa
ERROR: Cannot connect to api.anthropic.com:443
INFO: Retrying request (1/10) after failure...
ERROR: Cannot connect to api.anthropic.com:443
INFO: Retrying request (2/10) after failure...
ERROR: Cannot connect to api.anthropic.com:443
INFO: Retrying request (3/10) after failure...
```
I've checked the network and tried to invoke the API through a Node.js client, and it works fine.
Thank you in advance.
Do you have `curl` installed on your Windows? Maybe I can fall back to PowerShell one-liners. (pancake)
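In case it helps, a quick way to check from a plain cmd.exe prompt whether a working curl is on the PATH (just a suggestion, not something r2ai runs itself):

```
REM locate curl.exe on the PATH and print its version
where curl
curl --version

REM quick connectivity check against one of the failing endpoints
curl -v https://api.mistral.ai
```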
Yes, I have curl installed on my PC.
These are the results:

```
C:\>curl https://api.mistral.ai --ssl-no-revoke
{ "message":"no Route matched with those values", "request_id":"40131d19d4795b61ebce1c2b78f91b08" }

C:\>curl https://api.anthropic.com --ssl-no-revoke
curl: (6) Could not resolve host: api.anthropic.com

C:\Users\vincw>curl http://localhost:11434 --ssl-no-revoke
Ollama is running

C:\>
```
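The Ollama base URL answers, so as a further sanity check the API itself can be queried directly (these are standard Ollama routes, not something specific to r2ai):

```
REM list the locally installed models
curl http://localhost:11434/api/tags

REM report the server version
curl http://localhost:11434/api/version
```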
I've also tried changing the r2ai.http.backend parameter to system, libcurl, etc., with no success. :-(
@radare, thanks in advance.
After countless attempts and a deep dive into the C source code, I tried using curl by downloading and installing it on Windows. I placed curl.exe and libcurl-x64.dll inside the bin folder of radare2, and also set the environment variable R2_CURL=1 to enable curl support.
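For reference, a minimal sketch of that setup from a cmd.exe prompt (the radare2 install path below is an example from my machine, not a fixed requirement):

```
REM place the curl binaries next to the radare2 executables
copy curl.exe "C:\Program Files\radare2\bin\"
copy libcurl-x64.dll "C:\Program Files\radare2\bin\"

REM enable the curl-based HTTP backend for this shell session
set R2_CURL=1
```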
Then, in a new r2 session:

```
r2 C:/tmp/a.exe
aa
afl~main
pd 30
.......
.....
[0x00401180]> r2ai -e api=ollama
[0x00401180]> r2ai -e model=codellama:latest
[0x00401180]> r2ai -d Explain the main function.
```
I can now see the requests reaching the Ollama server on my Windows PC. The JSON request sent from r2ai to Ollama is correctly formatted; I confirmed this by capturing the traffic with Fiddler:
{"model":"codellama:latest","stream":false,"max_completion_tokens":4096, "messages": [{"role":"system","content":"You are a reverse engineer. The user is reversing a binary, using radare2. The user will ask questions about the binary and you will respond with the answer to the best of your ability."},{"role":"user","content":"Explain the main function.\\n[BEGIN]\\n// callconv: eax cdecl (stack);\\nint entry0 (int stack) {\\n loc_0x00401180:\\n push (0x404ba4)\\n sub.VB40032.DLL_ThunRTMain ()\\n byte [eax] += al\\n eax += dword [eax]\\n byte [eax] += al\\n if (((unsigned) v) < 0) goto 0x4011a8 // likely\\n goto loc_0x00401192;\\n loc_0x004011a8:\\n // CODE XREF from entry0 @ 0x401190(x)\\n invalid\\n return eax;\\n}\\n\\n[END]\\n"}]}
A correct JSON answer is sent back from Ollama to r2ai, again confirmed with Fiddler:
{"model":"codellama:latest","created_at":"2025-10-19T15:35:19.3660911Z","message":{"role":"assistant","content":"\nThe main function is called entry0. It takes an integer argument stackand returns an integer. The function can be considered the entry point of the program, as it is the first function that is executed when the program starts running.\n\nThe function consists of several instructions, but the most important ones are:\n\n*push 0x404ba4: This instruction pushes the value 0x404ba4 onto the stack. The exact meaning of this value depends on the context in which it is used. It could be a return address or a pointer to some data.\n* sub.VB40032.DLL_ThunRTMain (): This instruction calls the function DLL_ThunRTMainfrom a DLL namedVB40032. The exact meaning of this function depends on the context in which it is used, but it could be related to some initialization or shutdown process.\n* byte [eax] += al: This instruction adds the value stored at the memory location pointed to by eaxto the value stored in registeral. The exact meaning of this instruction depends on the context in which it is used, but it could be related to some data processing or string manipulation.\n* if (((unsigned) v) \u003c 0) goto loc_0x00401192;: This instruction checks if the value stored in register v(which is a signed integer) is negative. If it is, the program jumps to the location pointed to byloc_0x00401192.\n* goto loc_0x004011a8;: This instruction unconditionally jumps to the location pointed to by loc_0x004011a8.\n\nOverall, this function appears to be a complex routine that performs some data processing or string manipulation, and then either returns a value or calls another function. The exact meaning of the code depends on the context in which it is used, but it seems likely that it plays an important role in the overall functionality of the program."},"done":true,"done_reason":"stop","total_duration":63118767400,"load_duration":26934500,"prompt_eval_count":272,"prompt_eval_duration":1582771100,"eval_count":465,"eval_duration":60385621800}
### But in the r2/r2ai console the printed message is (null)
```
[0x00401180]> r2ai -e api=ollama
[0x00401180]> r2ai -e model=codellama:latest
[0x00401180]> r2ai -d Explain the main function.
(null)
```
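As a further check that the server side behaves, the same kind of request can be replayed by hand outside r2ai (a minimal sketch; I'm assuming Ollama's native /api/chat endpoint here, which is what the response format above suggests, and a shortened prompt instead of the full captured payload):

```
REM minimal manual replay of the chat request against the local Ollama server
curl http://localhost:11434/api/chat -d "{\"model\":\"codellama:latest\",\"stream\":false,\"messages\":[{\"role\":\"user\",\"content\":\"Explain the main function.\"}]}"
```

If this also returns a proper JSON answer, the remaining problem is likely in how r2ai handles the reply rather than in the transport.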
Please help @radare
Thank you in advance.
Can you try this PR? https://github.com/radareorg/r2ai/pull/209
It's a fallback to PowerShell when curl doesn't exist. I think the problem is a little further away from here. Also, I would recommend trying gemma3:12b or gpt-oss:20b instead.
Also, the way to use it is `r2ai Explain the main function.` There's no need to pass any flag to ask a question.
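For example (a minimal sketch; gemma3:12b is just one of the models suggested above):

```
[0x00401180]> r2ai -e api=ollama
[0x00401180]> r2ai -e model=gemma3:12b
[0x00401180]> r2ai Explain the main function.
```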
Hi @trufae, the network communication issue with Ollama has been resolved by using the R2_CURL env var, and there are no longer any network issues. About the parameter: without the -d flag, the JSON request (as I saw in the Fiddler proxy) doesn't contain the code of main, and the model responds in a generic way.
Regarding the model, I've already tried many of them without success, including gemma3:12b and gpt-oss:20b; see the screenshots below.
As I wrote above, and as you can see from the screenshot, the response of r2ai is (null) even with a complete response from Ollama. The Ollama Windows version is 0.12.6.
Thank you in advance.
Thank you for today's conference, @radare. I want to let you know that with version 1.2.0, r2ai doesn't work anymore on Windows 😢. Thank you in advance. Please help.
I just tagged 1.2.2 with some last bug fixes and features I found after 1.2.0, thanks for spotting that bug! Let me know if it works on Windows now. It's always the last OS I take care of, because I want to get the r2ai code working stably before testing all platforms, so I'm happy you report all that.
Can you confirm the problem is solved?