LocalAI P2P connection failure between Linux server and Windows 11 WSL2 instances
LocalAI version: localai/localai:v2.20.1-cublas-cuda12-ffmpeg-core
Environment, CPU architecture, OS, and Version:
- Linux server: x86, RTX 2060 12 GB GPU, Docker on Ubuntu 22.04
- Windows 11 PC: x86, WSL2, RTX 4070 12 GB GPU, Docker Desktop (WSL2 backend) with Ubuntu image
Describe the bug
Unable to establish a P2P connection between LocalAI instances running on a Linux server and a Windows 11 PC with WSL2. The server UI shows 0/0 nodes connected, despite both instances being on the same LAN and being able to communicate otherwise.
To Reproduce
- Start the LocalAI container on the Windows 11 PC with WSL2 using the provided docker-compose configuration
- Start the LocalAI container on the Linux server using the provided docker-compose configuration
- Open the LocalAI server UI
- Observe that the UI shows 0/0 nodes connected
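The docker-compose files referenced above aren't attached; a minimal sketch consistent with this setup follows. The image tag and commands are taken from this report, but the environment variable names (`TOKEN` for the shared P2P token) and the use of `network_mode: host` are assumptions based on LocalAI's distributed-inference docs, not a verified reproduction.

```yaml
# Linux server (sketch): API instance with P2P enabled.
services:
  localai:
    image: localai/localai:v2.20.1-cublas-cuda12-ffmpeg-core
    command: whisper-base --p2p
    environment:
      - TOKEN=<shared-p2p-token>   # assumed env var; same token on both machines
    network_mode: host             # avoids Docker NAT on the Linux side

---
# Windows 11 / WSL2 PC (sketch): llama.cpp RPC worker.
services:
  localai-worker:
    image: localai/localai:v2.20.1-cublas-cuda12-ffmpeg-core
    command: worker p2p-llama-cpp-rpc
    environment:
      - TOKEN=<shared-p2p-token>   # assumed env var
```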
Expected behavior
The LocalAI instances should establish a P2P connection, and the server UI should show at least one connected node.
Logs
I don't think it's useful to attach the full logs; they mainly say "found node" or "successfully announced".
Additional context
- Both machines are on the same LAN and can communicate with each other for other applications
- CUDA and NVCC are running correctly on both systems
- The `llama-cpp-args` parameter is not recognized when trying to set a specific port
- There might be issues with WSL network port forwarding, but it's unclear since the port can't be set manually
- Both instances are using the same token for authentication
- The PC instance is using the `worker p2p-llama-cpp-rpc` command, while the server instance is using `whisper-base` with the `--p2p` flag
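By default, WSL2 sits behind a NAT'd virtual network, so inbound traffic from the LAN does not reach ports inside WSL2 unless Windows explicitly forwards them. A hedged sketch of such a forward follows; `<PORT>` and `<WSL_IP>` are placeholders, and since this report notes the P2P port cannot be pinned via `llama-cpp-args`, a fixed forward like this may not help until the port can be set manually.

```
# Run in an elevated PowerShell on the Windows 11 host.

# Find the WSL2 VM's internal IP address
wsl hostname -I

# Forward <PORT> from the Windows LAN interface into WSL2
netsh interface portproxy add v4tov4 listenaddress=0.0.0.0 listenport=<PORT> connectaddress=<WSL_IP> connectport=<PORT>

# Allow the port through Windows Defender Firewall
netsh advfirewall firewall add rule name="LocalAI P2P" dir=in action=allow protocol=TCP localport=<PORT>
```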