WSL2 support in Cortex-Debug: Discussion and Strategy
WSL2 is next on my list. I am trying to set up a Windows machine (my previous WSL2 seems like it got corrupted). I would like people to subscribe to this Issue and comment and help test it. There are several issues and workarounds already related to this and I would like to consolidate the discussion here.
First, comments are welcome on how it should work.
References: Issues #451, #402, #361, #66 and PR #328. There may be more.
Just my $0.02:
I would want to be able to work in either of those two setups:
- Building takes place in the WSL2 subsystem directly. Here, the cross tools (arm-none-eabi-gdb etc.) are installed in the WSL2 subsystem. Since there doesn't seem to be any USB passthrough into WSL2, the debugger, which runs inside WSL2, needs to establish contact with the probe connected to the Windows host using networking.
- Building takes place in a docker container running within the WSL2 subsystem. You would typically be running Docker Desktop, which on Windows offers support for WSL2. In a nutshell, a Linux container with the cross tools runs within the WSL2 subsystem, and VS Code runs on Windows in Dev Container mode. Again, since gdb runs inside the container, the network must be used to contact the debug probe.
The second approach may seem more complex, but it offers the advantage that the tool environment for a project can be shrink-wrapped in a ready-made container that is described concisely with a Dockerfile that can be maintained and versioned in its own git repository. Multiple such containers can be supported in parallel for different projects, without them getting in each other's way.
The problem of how GDB talks to the probe is multifaceted and depends a lot on the actual probe. It will have to be a network connection of some sort. Some example scenarios are:
- You have a probe with a network port, for example a SEGGER JLink PRO. In this case, there's probably no difference in the communication setup compared to the non-WSL2 case. With SEGGER, you install the Linux version of the SEGGER JLink software in the Dev Container (second case above) or in WSL2 (first case above). No special software is needed on the Windows host.
- You have a SEGGER JLink connected to the Windows host via USB. In this case you can use the JLink remote server included in the SEGGER JLink software installation, which means you have to install the Windows version on your host. You only need to run the remote server on the host. Within WSL2 or within the Dev Container, you need the Linux version of the SEGGER software. The JLink GDB server is run in WSL2 or the Dev Container, respectively, and is configured to connect to the remote server via IP; see the SEGGER docs for that. This scenario can also be used with a SEGGER probe connected to a different computer on the network. If your 3rd party probe can be reflashed to act as a JLink probe, which includes numerous probes integrated on evaluation boards, this should work too.
- You run the GDB server on the Windows host, and have GDB inside WSL2 or the Dev Container contact this GDB server over IP (see the configuration sketch after this list). You might have to run the GDB server manually, unless a way can be found to start it from within the Dev Container. This should work even without a JLink-based probe. In WSL2 or the Dev Container, no special probe-related software should be needed.
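To make the last scenario concrete: with a GDB server already started by hand on the Windows host, a launch configuration inside WSL2 or the Dev Container could look roughly like the sketch below. The executable path, IP address and port are placeholders for whatever your setup uses; "servertype": "external" tells Cortex-Debug not to launch a server itself.

```json
{
    "name": "Attach via GDB server on the Windows host",
    "type": "cortex-debug",
    "request": "launch",
    "cwd": "${workspaceFolder}",
    "executable": "build/app.elf",
    "servertype": "external",
    "gdbTarget": "172.20.15.135:2331"
}
```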
The first step would be to offer information on how to configure those scenarios properly, i.e. to describe what should work and what shouldn't. In this step it would be acceptable to have to start some server manually, instead of having everything happen automatically when the debug button is pressed. It would also be acceptable to enter IP addresses or other settings manually, depending on the local setup.
The second step would be to automate this stuff as much as possible.
Does this help you, @haneefdm ?
@s13n Oh, helps a LOT, and thank you for such detail. And, a lot to think about over the weekend.
Would you agree with the following?
- Hard constraint: gdb should run wherever the source is compiled and ideally, this would be in WSL2/Linux environment. Or else path-names in the ELF file will be messed up. Messy, but can be corrected with [gdb source-paths](https://sourceware.org/gdb/current/onlinedocs/gdb/Source-Path.html).
- Hard constraint: the gdb-server should run wherever the HW/probe is. This is to accommodate all (most) gdb-servers. The J-Link PRO's remote-server is an exception, but in truth the real probe SW is running where the probe is, right?
I would be able to live with constraint 1, but perhaps others will disagree.
I think constraint 2 is a bit too restrictive. I would say GDB server should run either where the probe is, or where GDB is. Supporting a third location would be overkill. This also supports the JLink remote server. One instance where it matters is when you use a JLink with network interface, for example the JLink Pro. You want to run the JLink GDB server where the GDB is, otherwise you end up installing the SEGGER software on both the host and in the dev container.
Of course, with a USB connected probe, you have to install the respective software on the host.
I added Constraint 2 because of the physical interface (USB) the HW connects to. This is more of a constraint to me, as I have to find a solution that works that way. And I am not suggesting supporting a 3rd location at all -- sorry if my words implied that. To summarize, the gdb-server and the HW under debug are always on the same machine, attached at the hip, so to say.
It is also a generalization, beyond Windows+WSL2.
Well, USB isn't the only physical interface that is supported by probes, it is only the most common one (by far). Why would you restrict yourself like that? What do you gain?
I am not restricting myself to it. It is a reality that I am stating. Once the server is not local, then it is remote. There is no in-between and I don't see WSL as an in-between thing. Maybe this is where I am wrong. I really don't care how the gdb-server connects to the HW.
We have the following scenarios:
- gdb-server runs where VSCode/gdb runs. Great, we already do that
- gdb-server runs on another machine. Okay, not great. TCP port selection and launching of the server are not automatic, and until recently SWO was not supported. This works 100%, but people are not happy with it.
- gdb-server cannot run locally in some cases, which is where the WSL situation comes in, but it degenerates to scenario 2 above.
It is scenarios 2 and 3 that we are trying to address, to make them look almost like scenario 1. Note that we NEVER talk to the gdb-server, which is why we don't care where it lives. 90+% of the time TCP ports are used, and sometimes serial ports. But someone has to launch it. We never had to worry about whether the gdb-server is using USB or some other communication mechanism.
The reason GDB is run where the compiling is done is the pathnames embedded in the ELF file. If these are not right, breakpoints don't work, stack traces will not have references back to source code, etc.
One thing we did not talk about is where VSCode is running. In my head, gdb and VSCode are running on the same machine. One reason is that all communication with GDB happens over stdio, while gdb itself may talk to the gdb-server over some connection (local or remote). Another is that this is the model GDB has chosen, and it has worked for over 3 decades.
Btw, the MB (motherboard) failed on the PC where I used to have WSL installed. It was my only PC. I ordered a new one and am waiting. I normally do my testing in a VM, but here that gets convoluted: a Mac running Windows in a VM, which in turn hosts WSL2. So, no experiments until next week.
Have a look at https://code.visualstudio.com/docs/remote/remote-overview for some overview of how VS Code is used in remote mode, which includes WSL2.
We are talking about a scenario where VS Code runs on Windows in remote mode. The remote OS is either the WSL2 subsystem directly, or a docker container running within WSL2. The VS Code Server runs there, the source code resides there, and the cross tools including GDB run there.
The WSL2 case has the feature that you can launch Windows software from within the WSL2 subsystem. AFAIK you can't do this from within a docker container, but maybe the fact that VS Code runs on Windows provides a way to start some process there, even though the actual debugging takes place in the remote OS under control of the VS Code Server. I have no idea if VS Code helps you with that.
I am relatively new with the dev container way of working with VS Code, but I already prefer working in this way, due to the simplicity of maintaining a build environment that is specific to a project. You can also run the same container on a remote machine, for example your build server, rather than locally within WSL2, and you shouldn't notice much of a difference.
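As a sketch of what such a shrink-wrapped environment can look like, a minimal devcontainer.json just points at a prebuilt image and pulls in the extension. The image name here is a placeholder, and the "customizations" key assumes a recent Dev Containers schema:

```json
{
    "name": "cortex-dev",
    "image": "my-registry/arm-cross-tools:latest",
    "customizations": {
        "vscode": {
            "extensions": ["marus25.cortex-debug"]
        }
    }
}
```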
> Have a look at https://code.visualstudio.com/docs/remote/remote-overview for some overview of how VS Code is used in remote mode, which includes WSL2.
Okay, that is a very different model. I was aware of that. I will look into it while I wait, but if you look at the repo for it, it has 802 issues and no commits since 5/11/2021. It is a package containing 3 extensions; the last commit in those extension repos was 4 months ago.
Some of it is the inverse model of what I was thinking.
One thing that is important to me (selfishly) is how to debug this extension itself. Without that it would be horrible.
I worked a bit on the MS C++ debug adapter and it was very difficult to do any cross-platform debugging of the extension. It was like a one-man circus show, juggling multiple VSCodes running on different machines. Both VSCode and Visual Studio were needed. I can't even explain.
@haneefdm, I think @s13n is leading you down the correct path here. I'm not familiar enough with VS Code's internal workings to tell you what is technically possible to solve the problem, but I can offer my use case and viewpoint.
I'm currently working on setting up a development environment at work for using Zephyr. Our build pipeline will be all Linux based tools, but our corporate issued computers are all Windows machines. The setup differences for Zephyr between Windows and Unix based systems is painful. Ideally, I could just define a Docker image that has all the necessary build and dev tools in it and use it everywhere instead of depending on other devs to properly set up a bunch of prerequisite software. If I need to have them install a couple things like probe drivers, I can live with that. It's better than maintaining documentation on installing an entire dev environment and manually setting path variables in a locked down Windows machine.
Something to consider is that even though WSL2 and Windows are running on the same physical machine, for the purpose of this issue they are effectively two different systems. WSL2 is a lightweight VM and if it properly supported USB passthrough the technical implementation might not be so complex. However, WSL2 does not currently support USB passthrough.
Microsoft's docs on Remote Development and Codespaces might help explain remote workspaces better.
Good input here!
Indeed, the configuration I'm interested in is:
- the host is a Windows machine
- the whole build system, including compiler, source code, and gdb, is in the WSL2 container. Like @jdswensen, we have a Zephyr-based build system.
- the VSC runs in remote mode, i.e., the VSC UI is on the host but all calls that involve system operations are done on the container. What is nice about this mode is that you shouldn't need any changes to the extension code. We're developing an extension ourselves and all functions work fine in remote mode.
- the SEGGER tools are installed on the host.
As @haneefdm mentioned you can debug from WSL2 even today but the limitations are:
- you need to start the JLink GDB server manually
- you need to figure out the IP number of the host manually
This could be easily solved since, as @s13n mentions, you can start any Windows process from within WSL2. From what I have checked, this is a modification in the WSL2 kernel: when you try to execute a binary with a *.exe extension, the relevant syscall is captured and passed to the host. This means that we can launch the gdb server on the host and configure gdb to connect to the correct remote (host) port.
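As a sketch of how this can be wired up by hand today (the install path, device name, and port are placeholders for your setup):

```sh
# Start the Windows JLink GDB server from inside WSL2 -- the .exe suffix
# makes WSL2 hand the exec over to the Windows host
"/mnt/c/Program Files/SEGGER/JLink/JLinkGDBServerCL.exe" -device STM32F407VG -if swd -port 2331 &

# The WSL2-generated resolv.conf points its nameserver entry at the Windows host
HOST_IP=$(awk '/^nameserver/ {print $2; exit}' /etc/resolv.conf)

# Point the cross gdb inside WSL2 at the server on the Windows side
arm-none-eabi-gdb build/app.elf -ex "target extended-remote ${HOST_IP}:2331"
```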
The whole thing is essentially a workaround for the lack of USB passthrough, which won't come soon, if ever. Since this can't be done with Docker, perhaps we should tackle these issues separately?
Thank you all. My new PC is finally here. Setting it up right now and then I will be able to try stuff out myself on what works well.
I am sure I can figure it out, but do you know how the client (Docker or WSL) environment can know what the host IP is? I was told to look in /etc/resolv.conf, but that doesn't look right, at least for WSL2, especially if the client is in bridged mode or the host is using a VPN.
When using Docker Desktop for Windows, you can have the host's IP address resolved from within the container by using host.docker.internal as the DNS name.
See https://docs.docker.com/desktop/windows/networking/
Oh, that is super nice with Docker. Thanks @s13n
@haneefdm I think looking into /etc/resolv.conf is the correct way. It does work for me:

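For illustration, the WSL-generated file looks something like this (the address is setup-specific, but the nameserver entry is the Windows host as seen from inside WSL2):

```sh
$ cat /etc/resolv.conf
# This file was automatically generated by WSL. To stop automatic generation
# of this file, add the following entry to /etc/wsl.conf:
# [network]
# generateResolvConf = false
nameserver 172.26.32.1
```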
@wobe, Thanks. Does that work if the host Windows is using a VPN? Don't you see many entries in /etc/resolv.conf? The solution I am thinking of may not need to know the host IP at all. VSCode may help in this regard.
I don't see much of a difference, i.e., I have the same contents in the resolv.conf file.
> 1. Hard constraint: gdb should run wherever the source is compiled and ideally, this would be in WSL2/Linux environment. Or else path-names in the ELF file will be messed up. Messy, but can be corrected with [gdb source-paths](https://sourceware.org/gdb/current/onlinedocs/gdb/Source-Path.html).
I "vote" against that constraint. Just allow setting source-path directly in the launch configuration and everything is fine.
Technically it is not a hard constraint for me. Of course, you can use source paths, just like you can today. It has to do with the client-server VSCode architecture: this extension and GDB are attached at the hip. That is the true hard constraint for me.
See the architecture diagram in https://code.visualstudio.com/docs/remote/remote-overview.
As things stand, in one incarnation, Cortex-Debug would be classified in the "Remote OS" box (WSL, Docker, etc.) while the GUI itself would be in the "Local OS" box. That picture is not totally applicable to what we are doing, btw, especially for where the 'Application' is running. You can also see where the Source Code box is.
That little green box on the bottom right of that picture is GDB.
I don't even know if that architecture is feasible for me, but it is a start, as a lot of groundwork has already been laid.
That only applies if you use the VS Code Server - and as those binaries are non-free, I don't use that. That's likely the reason why I commonly think of "Local OS with source and GDB" attached to "GDB server with the process". This "simple" scenario has also worked fine for years for most setups. Note: when running on Windows, MSYS2 provides a gdb-multiarch.exe, so the "Debugger" part is solved (objdump is not).
> and as those binaries are non-free I don't use that.
Which binaries? The VSCode server(s)? And free as in Open Source or some other meaning (costs)?
In VSCode's mind, 'Local OS' means the host running the UI. To me, 'Local OS' also meant "Local OS with source and GDB", but I had to teach myself a different way of thinking. VSCode is my host, so I am playing by the host's rules and thus its terminology. I have to go back and edit all my comments to make sure.
Yes, the vscode servers. It is all about freedom, not price. This actually leads to the vscode server not running everywhere I'd like it to for remote debugging (I'm not sure it actually works in every distro one can install in WSL). Actually, the client part of vscode is also closed source, and the "gratis" extensions needed for that to work are only licensed for use with "Visual Studio Code binaries" (i.e., the ones provided by MS, where you are not even allowed to distribute your own copies). And even from a practical view: those binaries are not available for all GNU/Linux distros where vscode actually runs, which is the reason I only use binaries that are as free as vscode (the main source), nowadays mainly VSCodium.
Note: in VSCode's mind, "Local OS" is also something that runs vscode in a browser: it is actually the UI.
My team and I have been using the mentioned setup* for half a year now, and it's great. (* vscode running in Windows + the 'Remote' plugin; compiler + JLinkGDBServer running on the Linux side.) We use a Docker setup that uses the exact docker container that the buildserver uses (test what you fly, fly what you test). We also use an Ubuntu-based WSL2 setup, with compilers manually installed. (This is mainly a question of ergonomics: mapped drives, network firewalling by McAfee, etc.)
I'm on Segger JLink tools, so I can run the Segger JLink GDB server in Linux and specify an IP (of my Windows host) where I have a JLink remote server running. This is NOT the same as running the gdb-server on Windows.
"type": "cortex-debug",
"servertype": "jlink",
"serverpath": "JLinkGDBServerCLExe", <--- this *is* the linux executable
"ipAddress": "172.20.15.135", <-- connects to "jlink-remote-server" on this machine
Annoyances:
I have to write the IP directly; I can't write "host.docker.internal". I tried some ways to get it indirectly via something like "dig +short host.docker.internal", but so far no luck. ${env:HOST_IP} works, and I don't mind doing export HOST_IP=$(dig +short host.docker.internal) in a terminal when my IP changes on the Windows side, but that environment does not affect the VSCode environment, so it does not work. Any ideas greatly appreciated.
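One possible workaround, assuming getent is available in the container and that the VS Code server inherits the login-shell environment (which is setup-dependent, so treat this as a sketch): do the lookup once in ~/.profile, so that ${env:HOST_IP} in launch.json is already populated.

```sh
# ~/.profile inside the container: resolve the Docker host once per login.
# getent avoids needing dig; host.docker.internal is provided by Docker Desktop.
export HOST_IP=$(getent hosts host.docker.internal | awk '{print $1}')
```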
This is really minor: on Ubuntu-based WSL2 (the easiest to install, because it's in the official Windows Store), the gdb debugger is called arm-none-eabi-gdb, as expected by cortex-debug. On Fedora (the company default for docker containers, for some reason) one installs gdb-multiarch and the command is just 'gdb', so I need a "gdbPath": "gdb". I hope Fedora and Debian converge at some point, so I can get rid of this difference. (Some team members make a symlink for arm-none-eabi-gdb; others live with a dirty file in git.)
/T
@andyinno Can you try the tools from the command line, and use gdb from the command line as well? You can see the exact command-line options used in the Debug Console.
Btw, I have to remove your comment, as this thread is not for issue submissions or asking for help. Please open a new issue and someone might come along and help you. Once you submit a new issue, I will remove your comment from here.
If you want to tell us how to implement remote/WSL debugging, then this is the right place. You are doing something this tool was not designed for -- if it works, great.
Maybe this will change some of the scope: https://www.xda-developers.com/wsl-connect-usb-devices-windows-11/ and https://www.elevenforum.com/t/connecting-usb-devices-to-wsl.2514/. Not tested yet, so I haven't verified that it actually works to have probes connected directly to WSL2 via USB.
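For reference, the flow from those links boils down to something like this (the subcommand names have changed between usbipd-win releases, so check your version's help; the BUSID is just an example):

```sh
# On the Windows host, in an administrator prompt:
usbipd list                      # find the probe's BUSID, e.g. 2-4
usbipd bind --busid 2-4          # share the device (one-time)
usbipd attach --wsl --busid 2-4  # attach it to the running WSL2 distro

# Inside WSL2, the probe should now show up as a native USB device:
lsusb
```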
@lagerholm Thank you so much for the info. This is great.
Hello, I tried to use Cortex-Debug in WSL2, using the usbipd-win tool to connect a host-connected J-Link to the WSL2 environment. The J-Link connection looks fine (at first look, it seems JLink drivers need to be installed on both the host and WSL2 sides, aligned to the same version), but the extension doesn't stop at main. launch.json looks like the following:
"name": "DVC TopRow Emerald Inventory",
"type": "cortex-debug",
"request": "launch",
"cwd": "${workspaceFolder}",
"armToolchainPath": "/opt/gcc-arm-none-eabi/bin",
"executable": "path/to/elf",
"serverpath": "/opt/SEGGER/JLink/JLinkGDBServerCLExe",
"servertype": "jlink",
"device": "MK10DX256xxx7",
"interface": "jtag",
"serialNumber": "proper serial number",
"runToMain": true,
"stopAtEntry": true,
"svdFile": "path/to/svd"
> (at first look, it seems JLink drivers need to be installed on both the host and WSL2 sides, aligned to the same version)
That makes me suspicious. Are you sure you don't get one of the Windows-side tools called by accident? If the USB is properly visible in the WSL, it should work 100% on the Linux toolchain? (All IMHO, of course.)
Would like to test it out soon!
> That makes me suspicious. Are you sure you don't get one of the Windows-side tools called by accident?
I thought the same, so I checked, but that wasn't it.
> If the USB is properly visible in the WSL, it should work 100% on the Linux toolchain?
Same expectation here, but it actually didn't work. Can't say why.