nvim-dap-python
Execute test runners remotely in docker containers
I would like test_method() to execute the test runner in a running docker container and attach the debugger. I think this is a common workflow and it would be nice to have built-in support for it.
A possible implementation would be to execute a docker command such as
CONTAINER_ID=$(docker ps --filter "name=$CONTAINER_NAME" --quiet)
docker exec $CONTAINER_ID $PYTHON_PATH -m debugpy --listen 0.0.0.0:$PORT --wait-for-client -m pytest -s $TEST_COMMAND
and then attach normally via DAP remote adapter. There are some other complications such as mapping workspace paths between host and container.
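The path-mapping complication can be sketched in isolation. The helper below is illustrative only (the function names and roots are assumptions, not part of nvim-dap-python); it mirrors what debugpy's pathMappings setting effectively does: translate a host workspace path into its container equivalent, and back.

```python
# Hypothetical sketch of host <-> container path translation, the same idea
# debugpy's "pathMappings" expresses declaratively. Roots are illustrative.
from pathlib import PurePosixPath


def to_remote(local_path: str, local_root: str, remote_root: str) -> str:
    """Map a host workspace path to its location inside the container."""
    relative = PurePosixPath(local_path).relative_to(local_root)
    return str(PurePosixPath(remote_root) / relative)


def to_local(remote_path: str, local_root: str, remote_root: str) -> str:
    """Map a container path back to the host workspace."""
    relative = PurePosixPath(remote_path).relative_to(remote_root)
    return str(PurePosixPath(local_root) / relative)
```

For example, with the workspace mounted at /app in the container, a breakpoint in /home/me/project/tests/test_app.py on the host corresponds to /app/tests/test_app.py inside the container.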
Do you have a suggestion for what the user interface would look like? It's probably somewhat related to the discussion over here.
Yes, that seems like the same request, except I also want to use it in combination with the individual unit test parsing from this package. But handling docker containers in general might be out of scope here, because it would be nice to have a solution outside of Python as well.
I would suggest an interface similar to vimspector, see https://puremourning.github.io/vimspector/configuration.html#docker-example. Basically: configure the docker container name, port, and python path in the adapter. Configure the runtime command (pytest -s ...) and the workspace mapping in the configuration. When using a function like test_method(), the runtime command should be constructed automatically.
Okay, I think I have some ideas how to do this, but I have a few more questions to make sure I have the full picture.
- What's usually your workflow to start the container initially?
- Do I understand it right that you're using the container mostly for the python interpreter?
- Do you know what the workflow would look like for a compiled language? Would it require rebuilding the container image on each source change?
Cool thanks for your consideration. To answer your questions:
- I just start the docker container manually (docker-compose ...). The container contains a Python environment with all the dependencies to run the application. The source code is mounted as a volume in the container, but the path might be different from the host system.
- There are several reasons to use a container, but the biggest one is the Python environment itself, yes. Think developing on macOS but testing applications on the same Linux images used in production.
- I think you would only need to rebuild the container image if the dependencies change or you want to distribute your application via the image. Otherwise I would imagine you just recompile the application in the container and run.
Could you point to a repo or have some Dockerfile/docker compose files as a minimal example that could be used to test this?
Vimspector has a python example that includes a Docker file for remote debugging. See https://github.com/puremourning/vimspector/tree/master/support/test/python/simple_python. In the example the source file is added via the Dockerfile rather than a shared volume.
Wading in on this. Having to work with multiple languages, I'm forever working with Docker. Having the native ability to pass a test to docker in the way @stephan-hesselmann-by suggests would be a very useful feature - not just to his repo but to nvim-dap.
I'm going to cobble something together this week as I explore Python, Pytest and Docker with nvim-dap and will report back.
Okay, so I have this playing nicely with vim-test. I set my breakpoints, run the test, and then type :DebugPy 0.0.0.0 3000, which launches the debug session nicely.
For reference, DebugPy command is:
vim.cmd [[command! -complete=file -nargs=* DebugPy lua require'plugins.dap'.attach_python_debugger({<f-args>})]]
My vim-test Pytest strategy is:
vim.g['test#python#pytest#executable'] = 'docker-compose -f "./docker-compose.yml" exec -T -w /usr/src/app web python -m debugpy --listen 0.0.0.0:3000 --wait-for-client -m pytest'
where web is the name of my web service in my docker-compose.yml file. Also in that file, I've exposed port 3000.
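For readers assembling a similar strategy string, here is a hypothetical helper that builds the same executable command from its parts. The service name, port, working directory, and compose file path are assumptions taken from the example above, not fixed by vim-test or this plugin.

```python
# Illustrative helper that assembles the docker-compose/debugpy invocation
# used as the vim-test pytest executable. All parameter values are examples.
import shlex


def debugpy_test_command(service: str, port: int, workdir: str,
                         compose_file: str = "./docker-compose.yml") -> str:
    """Build the executable string for remote pytest debugging via debugpy."""
    parts = [
        "docker-compose", "-f", compose_file,
        "exec", "-T", "-w", workdir, service,
        "python", "-m", "debugpy",
        "--listen", f"0.0.0.0:{port}", "--wait-for-client",
        "-m", "pytest",
    ]
    # shlex.quote guards against paths or names containing spaces.
    return " ".join(shlex.quote(p) for p in parts)
```

Calling debugpy_test_command("web", 3000, "/usr/src/app") reproduces the command above, which vim-test then appends the selected test path to.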
I also use floaterm and I remember the -T in the command stopped some weird tty input error.
Next stop will be writing a small Lua function to set the strategy depending on if we're debugging or not.
I've incorporated a neat way of getting the Python method to be tested (using a Vim-test helper) and passing it to Docker to then wait for debug commands. I use jobstart to initiate Docker in the background before beginning. If anyone is interested, my single Lua file is here.
Hopping in here because I don't know if you have seen this, but https://github.com/rcarriga/vim-ultest exists and claims support for nvim-dap. I haven't experimented with it yet because I lack a vim-test setup, but since you seem versed in vim-test it might simplify things even further or have a better UI for Neovim.
Great shout and I randomly saw that too when I updated packer a few days ago. It's on my to-do list to play with so I'll reply back if it may be of use.
I've played with vim-ultest and it works nicely with nvim-dap, but again, connecting remotely seems to be cumbersome.
I stumbled upon this issue while looking for a solution to run the container before attaching the debugger. I'm not sure it matches the exact requirement of this issue, but if you are looking for what I was looking for, you can try this task runner plugin: overseer.nvim
It has integration with nvim-dap and also supports .vscode/tasks.json and .vscode/launch.json. The main point is that it can handle a preLaunchTask configured in .vscode/launch.json.
Setting up overseer.nvim is straightforward; you can explore this setup guide.
Once done with setup, you can create launch.json and tasks.json like below:
launch.json
{
  "version": "0.2.0",
  "configurations": [
    {
      "name": "Python: Remote Attach remote_python",
      "type": "python",
      "request": "attach",
      "preLaunchTask": "Start server debug mode",
      "connect": {
        "host": "0.0.0.0",
        "port": 5678
      },
      "logToFile": true,
      "cwd": "${workspaceFolder}",
      "pathMappings": [
        {
          "localRoot": "${workspaceFolder}",
          "remoteRoot": "/app"
        }
      ]
    }
  ]
}
tasks.json
{
  "version": "2.0.0",
  "tasks": [
    {
      "type": "shell",
      "label": "Start server debug mode",
      "command": "docker-compose -f docker-compose-local-debug.yml up -d; sleep 1; until lsof -i:5678 | grep -m 1 'dock'; do sleep 0.2; done; sleep 1"
    }
  ]
}
You don't strictly need to create a launch.json; you can instead add a preLaunchTask attribute to the remote debug config you have already defined for Python.
That's it: now whenever you trigger require('dap').continue() and select the remote debug config, overseer will automatically run the task defined in preLaunchTask and then attach the debugger.
The task command might be a hacky solution, but it works for me. All I am doing is running the docker container with the entrypoint set to python3 -m debugpy --listen 0.0.0.0:5678 --wait-for-client src/main.py, which runs the container in debug mode exposing port 5678. Then I just run a loop to check whether the container is up and port 5678 is occupied by a docker process.
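As an alternative sketch of the readiness check (the lsof/grep loop above), one could poll the port directly. Note one caveat: probing opens a real TCP connection, and a server waiting for a single client with --wait-for-client might treat the probe as that client, so the lsof approach may actually be safer for debugpy. The host, port, and timeout below are illustrative values, not anything this thread prescribes.

```python
# Illustrative readiness check: poll until something accepts a TCP
# connection on the given port, instead of grepping lsof output.
import socket
import time


def wait_for_port(host: str, port: int, timeout: float = 30.0) -> bool:
    """Return True once a TCP connection to host:port succeeds, else False."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        try:
            with socket.create_connection((host, port), timeout=1.0):
                return True
        except OSError:
            time.sleep(0.2)  # server not up yet; retry shortly
    return False
```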
Given there are some solutions posted here in this issue I'm closing this.