One step setup
Embedded development involves cross compilation and remote debugging; the Rust toolchain doesn't provide all the tools, so you have to install external ones like: a cross linker (e.g. `arm-none-eabi-ld`), a cross debugger (e.g. `arm-none-eabi-gdb`), some tool to communicate with the remote target (e.g. OpenOCD), etc.
It would be great if the user didn't have to figure out where to get all those tools and install them manually.
There are a few upcoming changes that might affect this situation.
- An `lld` binary will be shipped with the Rust toolchain. cf. PR rust-lang/rust#48125. `lld` is a multi-arch linker and it can be used to link ARM Cortex-M binaries, so once this is in place users won't need to install `arm-none-eabi-ld`. I don't think `lld` supports any of the other embedded targets (MSP430, AVR or RISCV), though.
- An `lldb` binary will be shipped with the Rust toolchain. cf. issue rust-lang/rust#48168. `lldb` is a multi-arch debugger. I personally could never get it to work with OpenOCD; I never found an equivalent to the `monitor` and `load` commands which are used to flash the binary into the remote target. `lldb` can debug ARM Cortex-M programs just fine; I have used it to debug an emulated ARM Cortex-M microcontroller (QEMU). Maybe someone has had more luck with it and can share their experience?
One way to address this setup problem could be to provide an installer (script), like https://rustup.rs/ does, but for each target architecture.
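As a very rough sketch, such a script could start by checking which of the external tools are already present and pointing the user at installation instructions for the rest (the tool list and messages below are only illustrative):

```
#!/bin/sh
# Check for the external tools an ARM Cortex-M setup typically needs and
# report anything that is missing.
for tool in arm-none-eabi-ld arm-none-eabi-gdb openocd; do
    if ! command -v "$tool" >/dev/null 2>&1; then
        echo "missing: $tool -- see the installation notes for your platform"
    fi
done
```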
cc @jcsoo
> I never found an equivalent to the monitor and load commands which are used to flash the binary into the remote target.
The `monitor` command is `process plugin packet monitor` in `lldb`, and I'm afraid it's rather user unfriendly:

```
(lldb) process plugin packet monitor help
packet: qRcmd,68656c70
response: O54686520666f6c6c6f77696e67206d6f6e69746f7220636f6d6d616e64732061726520737570706f727465643a0a
```
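For reference, the `qRcmd` payload and the reply are just hex-encoded ASCII (the leading `O` in the reply is the GDB remote protocol's console-output marker, not part of the hex), so you can decode them by hand, e.g. with `xxd`:

```
$ echo 68656c70 | xxd -r -p
help
$ echo 54686520666f6c6c6f77696e67206d6f6e69746f7220636f6d6d616e64732061726520737570706f727465643a0a | xxd -r -p
The following monitor commands are supported:
```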
The flash functionality of the gdb stub is not supported.
IMO, it's a good idea to extend lldb to handle these use cases instead of giving up on it.
I haven't had a chance to try LLD / LLDB yet - for now, do you have to build LLVM from scratch to install them? What version should we be using to be close to what Rust is using? Or can we get them by doing a full Rust build?
Bobbin-CLI (https://github.com/bobbin-rs/bobbin-cli/) currently knows about many debuggers and flash loaders, and the documentation currently has pointers to download sites as well as some documentation about how to build and install them: https://github.com/bobbin-rs/bobbin-cli/blob/master/FIRMWARE.md
There's also a "bobbin check" command that scans for all tools that it knows about and displays version information:
mbp15:~ jcsoo$ bobbin check
Bobbin 0.8.0
Rust 1.25.0-nightly (27a046e93 2018-02-18)
Cargo 0.26.0-nightly (1d6dfea44 2018-01-26)
Xargo 0.3.10
GCC 5.4.1 20160919 (release) [ARM/embedded-5-branch revision 240496]
OpenOCD 0.10.0+dev-g7c2dc13 (2017-02-12-10:20)
JLink V6.19e (Compiled Sep 1 2017 18:27:51)
Bossa 1.7.0
Teensy 2.1
dfu-util 0.9
This could conceivably be extended to check for outdated versions and to add direct documentation links. Bobbin-CLI also has a database of USB Vendor / Product IDs that could be used to auto-detect devices with missing tools and point users to documentation and download links.
I've considered adding `rustup`-style functionality but I don't think it's practical because of the third-party nature of these tools. I think the effort is better spent in building rust-native tools and/or building support directly into Bobbin-CLI. For instance, Teensy is a simple USB HID protocol, dfu-util is a reasonably straightforward USB protocol, and Bossa uses a serial protocol.
Bobbin-CLI also supports dumping files onto USB MSD volumes, which covers many ST-Link and DAPLink debuggers, and it might not be too difficult to teach it enough of the GDB remote protocol to handle flashing to anything that exposes a GDB server (JLink, and Black Magic Probe). In the long run there is https://github.com/mbedmicro/FlashAlgo which is collecting flash algorithms for a wide variety of targets.
So, it could conceivably have native flash loading support for a fairly broad range of boards and debuggers, limiting the OpenOCD dependency to people that need remote debugging or need to support devices that aren't covered.
So is there interest in making an openocd crate so that it can be installed via cargo? I think this would be a requirement for useful `cargo flash` and `cargo debug` commands.
Repackaging completely unrelated C applications in cargo seems wrong. The gcc crate doesn't package gcc...
@jcsoo
> I think the effort is better spent in building rust-native tools and/or building support directly into Bobbin-CLI. For instance, Teensy is a simple USB HID protocol, dfu-util is a reasonably straightforward USB protocol, and Bossa uses a serial protocol.
A few years ago, I built a simple tool to replace Bossa for my own use, to upload to the SAM3X8E. Here: https://github.com/hannobraun/embedded/tree/master/uploader
I don't know how much is missing to make it useful, but I recall successfully using it to upload my program at the time. Maybe it can serve as a starting point or reference for your own efforts.
@whitequark gcc isn't essential to the toolchain, and would require a dozen different crates for each architecture. Relying on distro package managers can be an option for Linux/established computer architectures. If we want a seamless experience for non-linux and archs like riscv, what do you suggest as an alternative? Rewriting openocd in rust is probably not a realistic goal till September
@dvc94ch If you want a seamless experience? Rely on system package managers for Linux and OS X, and provide prebuilt Windows installers.
Even if you did package openocd as a crate, it depends on, at least, MinGW, MSYS and libusb1 to build on Windows, and these dependencies ensure that it's not going to be installable as simply as `cargo install openocd`. That's forgetting about any adapters more complex than FTDI ones...
I think OpenOCD is a well solved problem and I don't think we need to do much there. What's more useful, I think, is helping people understand the differences between the normal C development process for any given platform and the 'normal' Rust development process. People could effectively then apply those differences to almost any C example/tutorial they find for any random board we've never even heard of.
I think what I'm saying is everything after the ELF is a platform problem not a language problem and I think we should stick to language problems.
When Rust has LLD and LLDB integration, does anyone know whether Rust will build, ship, and install the standalone binaries also? And what about other utilities? `objcopy` will be needed to convert to hex format for many flash loaders, and I use `objdump` + `size` very often.
I would hate to do all this work and still need the user to install the full GCC or LLVM toolchain just to get access to those utilities.
Could they be packaged in a separate rustup `rust-tools` component?
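Hypothetically (no such component exists; the name is just the one floated above), installing it would then be a single rustup command:

```
$ rustup component add rust-tools
```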
> objdump + size
They're from GNU Binutils, which is distinct from GCC and LLVM. I agree Rust versions would be easier to ship, but I'm OK with depending upon binutils as-is.
> They're from GNU Binutils, which is distinct from GCC and LLVM.
There have been `llvm-objdump` and `llvm-size` since sometime late in the 3.x cycle.
> I'm OK with depending upon binutils as-is.
binutils are a really gross dependency to carry around. First, you get an entire second machine layer with its own independent bugs and deficiencies. There's no guarantee that the disassembler in binutils will know about all the same relocations as LLVM. (I've hit this.) Second, they're a pain in the ass to build on Windows as they require msys or cygwin.
> When Rust has LLD and LLDB integration, does anyone know whether Rust will build, ship, and install the standalone binaries also?
That's what rust-lang/rust#48125 does. An LLD binary will be shipped with the Rust toolchain; it will be somewhere in `$(rustc --print sysroot)`. There's some magic that will append the path to LLD to the `PATH` env variable when rustc invokes the linker, so the user won't have to do anything to their environment.
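If you're curious where it lands, you can just search the sysroot for it (a sketch; the exact location may differ between toolchain versions):

```
$ find "$(rustc --print sysroot)" -type f -name '*lld*'
```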
I don't know what the LLDB plan is with respect to `PATH`.
There's a Rust tool that's basically an `objdump` re-implementation. Unfortunately I don't recall its name but, iirc, it's based on the ~~Keystone~~ Capstone (dis)assembler.
EDIT: Err, not Keystone; Capstone I think
> There's a Rust tool that's basically an objdump re-implementation.
It's called cargo-sym but it doesn't seem like they are working on it anymore (no activity in the last 11 months).
I really don't think we should use anything besides `llvm-objdump` if we want to support lesser-used architectures. That's just duplicating work and introducing more opportunity for obscure errors.
Ask me about that time I found out that OR1K assemblers in LLVM and binutils didn't match up, and then that LLVM and ld.bfd didn't agree on whether some DWARF relocation is absolute or relative...
I tested `llvm-objdump` locally and it worked fine; I had to pass `-triple=thumbv7m-none-eabi` to it to get the correct disassembly, though. I also didn't see any demangling option, but piping the output through `c++filt` did the trick. `llvm-size` worked without problems.
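For concreteness, the invocations described above look roughly like this (the binary path is only an illustrative example):

```
$ llvm-objdump -d -triple=thumbv7m-none-eabi target/thumbv7m-none-eabi/release/app | c++filt
$ llvm-size target/thumbv7m-none-eabi/release/app
```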
Given that `lld` is being added to the Rust toolchain, I think we could push (submit an RFC) for shipping both `llvm-size` and `llvm-objdump` with the toolchain. That would eliminate the need for installing `arm-none-eabi-binutils` when targeting ARM Cortex-M and would do the same for the other targets once they get LLD support.
Once `llvm-objdump` and `llvm-size` are in place, we could publish some thin wrappers as Cargo subcommands (e.g. `cargo-objdump`) that take care of passing `-triple`, demangling the output, specifying the full path to the binary, etc.
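As a very rough sketch of what such a wrapper could do (all names and paths here are illustrative, not a real `cargo-objdump`):

```
#!/bin/sh
# Hypothetical cargo-objdump wrapper: pick the target triple and the built
# binary, then forward any extra arguments to llvm-objdump and demangle.
TRIPLE=thumbv7m-none-eabi
BIN=target/$TRIPLE/release/app
llvm-objdump -triple="$TRIPLE" "$@" "$BIN" | c++filt
```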
Can we get `llvm-objcopy` included also? It's needed for generating .bin or .hex files from the ELF output. All of the flash loaders (Bossa, Teensy, dfu-util) as well as ST-Link and DAPLink on-board drag-and-drop require one of those two formats.
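The conversions in question look like this with GNU objcopy flags; llvm-objcopy aims to be flag-compatible, though I'm not sure how complete its format support is (the paths are illustrative):

```
$ objcopy -O binary target/thumbv7m-none-eabi/release/app app.bin
$ objcopy -O ihex   target/thumbv7m-none-eabi/release/app app.hex
```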
While you're at it, also include:
- `llvm-nm`
- `llvm-cxxfilt`
- `llvm-cov`
> llvm-objdump and llvm-size since sometime late in the 3.x cycle.
Oh, cool!
> binutils are a really gross dependency to carry around
I totally agree that if there are LLVM equivalents then that's a much better approach, and an easier way to get all our architectures supported properly.
> thin wrappers ... cargo-objdump
Oh man, that would really improve my workflow.
Based on the discussion so far I have created two work items:
- #50 Ship llvm binutils with the Rust toolchain
- #51 `cargo-objdump`, `cargo-size`, etc.
Help wanted!
@therealprof had some success with lldb and stlink. [0]
monitor and load can probably be implemented as python scripts. If someone gets it working with openocd I can probably port lldb to riscv. According to [1] it doesn't seem too involved to get a basic working debugger running and could probably be implemented in a week or two. (gdb keeps segfaulting on me, so I'd be interested in trying a different debugger)
For reference, the porting steps were:
- Add msp430 triple to LLDB
- Describe msp430 registers in a Python file
- Fix DWARF parsing errors in LLDB
- Add msp430 breakpoint opcode to LLDB
- Implement msp430 ABI
- Hook up disassembler
[0] https://www.eggers-club.de/blog/2017/07/01/embedded-debugging-with-lldb-sure/
[1] https://llvm.org/devmtg/2016-03/Tutorials/LLDB-tutorial.pdf
@dvc94ch The "problem" with lldb is its funny interpretation of the "standard" gdbserver protocol. It'll pretty much only work reliably with it's own counter implementation lldb-server
but none of the other available tools (e.g. OpenOCD or Blackmagic Probe) with the only notable exception being st-util
from the STLink suite which works to a very basic extend.
I toyed around with `lldb` for a bit but the major problem I found was that `lldb` stuffs some important `gdb` commands into a NUL-terminated string instead of looking at and keeping their real size, which means that you cannot put arbitrary binary data in them, thus preventing a lot of stuff from working. Changing that seems to be a major effort, especially since lldb uses a rather antiquated development process with a not-so-open community.
> lldb uses a rather antiquated development process with a not-so-open community
@therealprof mmh, aren't the same people that work on llvm also working on it? I got a small patch upstreamed; it didn't seem that annoying. I guess it depends on who is actually in charge of the code you want to change? Anyway, that's a shame...
Ooh, actually there seems to be movement on the gdbserver protocol support front since I last checked: http://lists.llvm.org/pipermail/lldb-dev/2018-January/013078.html
@dvc94ch There's some overlap. But `lldb` is mostly handled by Apple and Google people.
Nice, things are starting to move. Google contributed vFlash command support to lldb just a few days back; however, it has been reverted due to virtual/physical address confusion. Otherwise I'd have done a quick check of whether the OpenOCD situation has improved.
@dvc94ch You mentioned msp430 in your comment, did you mean riscv?
Yes, I copy pasted it from the link I mentioned.
@dvc94ch Interesting, I'll need to check out this msp430 lldb port.
Update: LLD is now being shipped with the Rust toolchain. You'll need Xargo v0.3.11 to be able to use it though.
Unfortunately, it doesn't seem to be ready for prime time (at least when dealing with ARM Cortex-M). See the issues I encountered in https://github.com/japaric/cortex-m-rt/issues/53#issuecomment-371972935.
So, there's a little-known feature of binutils: you can configure it with `--enable-targets=all` and then it is also multi-target! Or rather, every tool is except for the GNU Assembler, but we don't use that for anything. Perhaps we should also distribute this, at least as a stop-gap until `lld` has platform parity?
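One possible way to build such a multi-target binutils from source (the version number and install prefix below are just examples):

```
$ tar xf binutils-2.30.tar.xz && cd binutils-2.30
$ ./configure --enable-targets=all --prefix="$HOME/.local"
$ make && make install
```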