
rust-analyzer via bazel documentation

Danielkonge opened this issue 1 year ago

I have a rust-analyzer setup that seems to work okay with rules_rust and bazel now, but I am not that used to using bazel/rules_rust yet, so I would like to ask if what I do makes sense or if there is a better way.

I thought this might fit better in Discussions, but decided to make it an issue, since I think the problem is common enough that there should be better documentation. That is, this issue is really asking for more documentation on setting up rust-analyzer with rules_rust. (I have seen http://bazelbuild.github.io/rules_rust/rust_analyzer.html, but think it would be better to write more about the rust-analyzer setup directly, so it covers more than just VSCode. Also, there seem to be a few typos in the docs; see the NOTE below.)

What I have done is to put something like the following in my .bazelrc:

build:rust_analyzer --keep_going
build:rust_analyzer --@rules_rust//:error_format=json
build:rust_analyzer --@rules_rust//:rustc_output_diagnostics --output_groups=+rustc_rmeta_output,+rustc_output
build:rust_analyzer --@rules_rust//:extra_rustc_flag=-Copt-level=0

on top of using rust_register_toolchains in my WORKSPACE file. (Also there is an alias for gen_rust_project.)

NOTE: I use rustc_output_diagnostics, rustc_rmeta_output and rustc_output but the docs say output_diagnostics, rust_lib_rustc_output and rust_metadata_rustc_output.

(So if nothing else, I think this part of the docs needs an update.)
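If it helps anyone double-check the names, the output groups a target actually exposes can be inspected directly (a sketch; `//mycrate:mycrate` is a placeholder target, and `providers()` comes from cquery's Starlark output mode):

```shell
bazel cquery //mycrate:mycrate --output=starlark \
  --starlark:expr='dir(providers(target)["OutputGroupInfo"])'
```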

Then in neovim I have a setup like

local cwd = vim.fn.getcwd()
local bazel_dir = '/path/to/bazel/directory'
local is_in_bazel_dir = string.sub(cwd, 1, #bazel_dir) == bazel_dir

local lspconfig = require('lspconfig')
lspconfig.rust_analyzer.setup({
  -- other setup ...
  settings = is_in_bazel_dir and {
    ['rust-analyzer'] = {
      cargo = {
        buildScripts = {
          overrideCommand = {
            'bazel',
            'run',
            '//:gen_rust_project',
            '--config=rust_analyzer',
            '//...',
          }
        },
      },
      check = {
        overrideCommand = {
          'bazel',
          'build',
          '--config=rust_analyzer',
          '//...',
        }
      },
    }
  } or nil
})

which tells rust-analyzer to use

rust-analyzer.cargo.buildScripts.overrideCommand = bazel run //:gen_rust_project --config=rust_analyzer //...

and

rust-analyzer.check.overrideCommand = bazel build --config=rust_analyzer //...

when in my bazel directory and just the standard setup otherwise. I am not sure how to set it up per directory in other editors, but I think the VSCode setup should just be something like:

{
  "rust-analyzer.cargo.buildScripts.overrideCommand": ["bazel", "run", "--config=rust_analyzer", "//:gen_rust_project","//..."],
  "rust-analyzer.check.overrideCommand": ["bazel", "build", "--config=rust_analyzer", "//..."]
}

My question is now whether this kind of setup makes sense. Am I doing something weird in the buildScripts override?

Also, is there an easy way to keep my rust-analyzer builds from interfering with my usual build cache (because of the change of flags)? It would be nice to have it behave even more like cargo check, although my current setup works okay.
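One possible mitigation (a sketch, not something from the rules_rust docs; the same separate-output-base trick appears in a script later in this thread, and /tmp/bazel-rust-analyzer is an arbitrary path):

```shell
# The startup option --output_base gives the rust-analyzer builds their
# own analysis and output caches, so the rust_analyzer config's flags
# never invalidate the cache used by normal interactive builds.
bazel --output_base=/tmp/bazel-rust-analyzer build --config=rust_analyzer //...
```

The trade-off is a second copy of the build outputs on disk and a cold first run.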

Either way, I think slightly better documentation for setting up rust-analyzer would be good.

Danielkonge avatar Aug 18 '24 16:08 Danielkonge

I don't think that cargo.buildScripts.overrideCommand should run gen_rust_project. That is the role of rust-analyzer.workspace.discoverConfig, and support for that is being introduced in https://github.com/bazelbuild/rules_rust/pull/3073.

According to the rust-analyzer docs, both cargo.buildScripts.overrideCommand and rust-analyzer.check.overrideCommand run pretty similar cargo check commands by default:

cargo check --quiet --workspace --message-format=json --all-targets --keep-going
cargo check --workspace --message-format=json --all-targets

For rust-analyzer.check.overrideCommand, there is the opportunity to use $saved_file to only build a subset of the workspace, but I'm not sure how best to utilise this.

As for which output groups we need for each command, I'm also not sure.

cameron-martin avatar Jan 12 '25 15:01 cameron-martin

Sorry for the slow reply.

I don't think that cargo.buildScripts.overrideCommand should run gen_rust_project. That is the role of rust-analyzer.workspace.discoverConfig, and support for that is being introduced in https://github.com/bazelbuild/rules_rust/pull/3073.

I think you are right, this was written some time ago and I have since discovered that the build script command was useless.

As mentioned above, I think what I wrote works okay; my main issue is just that checks interfere with my usual build cache. I have been told that this should not be too hard to fix, but I haven't really had time to look into it.

Other than that, my main comment in this issue was that I think there are (were?) typos in the docs, since the output groups I saw in the code didn't match the docs, and it would in general be nice to have better docs for setting up something like a cargo check.

Danielkonge avatar Mar 14 '25 21:03 Danielkonge

Related: https://github.com/bazelbuild/rules_rust/issues/1649

cameron-martin avatar May 09 '25 16:05 cameron-martin

I thought that the "cargo check"-like functionality described here worked before, but it seems like it doesn't anymore. I just get errors like this:

2025-05-09T17:32:41.62088618+01:00 ERROR flycheck 0: File with cargo diagnostic not found in VFS: file not found: /home/cameron/Repos/camos/bazel-out/k8-fastbuild-ST-e5304c15b1af/bin/crates/camos_uefi/src/main.rs

This makes sense, because without first resolving symlinks these are not files within the editor's workspace. Maybe rust-analyzer has stopped resolving symlinks here?
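For anyone hitting the same error, the fix-up an editor-side wrapper would need is just symlink resolution before handing paths to rust-analyzer; a minimal sketch (the directory layout below simulates a bazel-out convenience symlink, so the paths are illustrative):

```python
import os
import tempfile

# Simulate a workspace whose "bazel-out" entry is a symlink back into
# the real source tree, like the path in the flycheck error above.
workspace = tempfile.mkdtemp()
real_src = os.path.join(workspace, "src")
os.makedirs(real_src)
open(os.path.join(real_src, "main.rs"), "w").close()

link = os.path.join(workspace, "bazel-out")
os.symlink(real_src, link)

# The diagnostic reports the symlinked path; resolving it yields a file
# that actually lives inside the editor's workspace.
reported = os.path.join(link, "main.rs")
resolved = os.path.realpath(reported)
print(resolved)
```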

cameron-martin avatar May 09 '25 16:05 cameron-martin

Was using Bazel for a rust project and ended up going down a rabbit hole to try and work around the cache issues.

First off, the environment used to run bazel from rust-analyzer will probably be sufficiently different to thrash the cache even before we get to anything else, so you need:

build --incompatible_strict_action_env

in your .bazelrc. Next up, there are two issues with doing a normal bazel build and passing command-line flags to change the output:

  • The CLI flags themselves will toss the analysis cache.
  • Once the project builds successfully, bazel of course won't run rustc again...which means that warnings disappear after the first save.

In order to work around this, I created a wrapper rule that uses transitions to build the dependencies and generates a wrapper script that cats the rustc output:

load("@aspect_bazel_lib//lib:paths.bzl", "BASH_RLOCATION_FUNCTION", "to_rlocation_path")
load("@rules_rust//rust:rust_common.bzl", "CrateInfo")

def _transition_output_impl(settings, attr):
    return {
        "@rules_rust//:error_format": "json",
        "@rules_rust//:rustc_output_diagnostics": True,
    }

_transition_output = transition(
    implementation = _transition_output_impl,
    inputs = [],
    outputs = [
        "@rules_rust//:error_format",
        "@rules_rust//:rustc_output_diagnostics",
    ],
)

def _rust_analyzer_check_impl(ctx):
    replay_script = ctx.actions.declare_file("%s_replay.sh" % ctx.label.name)

    rustc_outputs = []
    for dep in ctx.attr.deps:
        info = dep[OutputGroupInfo]
        if "rustc_rmeta_output" in info:
            rustc_outputs.extend(info.rustc_rmeta_output.to_list())
        elif "rustc_output" in info:
            # TODO: somehow generate rmeta for binaries too?
            rustc_outputs.extend(info.rustc_output.to_list())
        else:
            fail("%s has no rustc outputs" % dep)

    ctx.actions.write(
        output = replay_script,
        content = BASH_RLOCATION_FUNCTION + "\ncat " + " ".join([
            "$(rlocation %s)" % to_rlocation_path(ctx, output)
            for output in rustc_outputs
        ]),
        is_executable = True,
    )
    return [
        DefaultInfo(
            executable = replay_script,
            files = depset(rustc_outputs),
            runfiles = ctx.runfiles(rustc_outputs).merge(ctx.attr._runfiles.default_runfiles),
        ),
    ]

rust_analyzer_check = rule(
    _rust_analyzer_check_impl,
    attrs = {
        "_allowlist_function_transition": attr.label(
            default = "@bazel_tools//tools/allowlists/function_transition_allowlist",
        ),
        "_runfiles": attr.label(default = "@bazel_tools//tools/bash/runfiles"),
        "deps": attr.label_list(
            allow_empty = False,
            cfg = _transition_output,
            providers = [OutputGroupInfo, CrateInfo],
        ),
    },
    executable = True,
)

And then you can do

rust_analyzer_check(
  name = "check",
  deps = [":my_rust_binary"],
)

and now bazel run //:check will print out all the JSON diagnostics, without touching the cache, and including warnings.

(Note the way it takes multiple dependencies is kinda broken, because if one fails then you'll stop getting warnings for the other ones; their output never gets replayed! I didn't get around to fixing that though, because I ditched this for reasons outlined below...)

The thing is, this has a big inherent issue:

        if "rustc_rmeta_output" in info:
            rustc_outputs.extend(info.rustc_rmeta_output.to_list())
        elif "rustc_output" in info:
            # TODO: somehow generate rmeta for binaries too?
            rustc_outputs.extend(info.rustc_output.to_list())

rust_binary rules always build the entire thing, rather than just building the .rmeta file like cargo does, so the runtime of this is pretty slow compared to cargo check. Working around that would almost certainly require rules_rust changes.

Around this point though, I realized that rust_clippy already kinda does what I want: it doesn't build the output all the way. But its aspect behaves a bit differently in that output is always written to the file, even on error, so the "save the output and replay" trick no longer works. Thus I ended up with:

load("@rules_rust//rust:defs.bzl", "rust_clippy_aspect")
load("@rules_rust//rust:rust_common.bzl", "ClippyInfo", "CrateInfo")

def _transition_output_impl(settings, attr):
    return {
        "@rules_rust//:capture_clippy_output": True,
        "@rules_rust//:error_format": "json",
    }

_transition_output = transition(
    implementation = _transition_output_impl,
    inputs = [],
    outputs = [
        "@rules_rust//:capture_clippy_output",
        "@rules_rust//:error_format",
    ],
)

def _rust_clippy_json_impl(ctx):
    return [
        DefaultInfo(
            files = depset(ctx.attr.dep[0][ClippyInfo].output.to_list()),
        ),
    ]

rust_clippy_json = rule(
    _rust_clippy_json_impl,
    attrs = {
        "_allowlist_function_transition": attr.label(
            default = "@bazel_tools//tools/allowlists/function_transition_allowlist",
        ),
        "dep": attr.label(
            cfg = _transition_output,
            providers = [CrateInfo],
            aspects = [rust_clippy_aspect],
        ),
    },
)

Now I can put this in my BUILD:

rust_clippy_json(
    name = "clippy_json",
    dep = ":my_rust_project",
)

and then get editor diagnostics by cat-ting the output files manually via a shell wrapper:

bazel --quiet build //:clippy_json 2>/dev/null || :
cat $(bazel --quiet cquery //:clippy_json --output=files)

The result works pretty quickly (the entire thing runs in ~2s) and also gives me the more verbose clippy diagnostics, which I should probably be using anyway. Unfortunately, the final cquery adds another half-second to the time, but ehh this is good enough for me.

refi64 avatar Jun 04 '25 01:06 refi64

Hey @refi64, I'm following your setup (thanks btw!) and the json produced is mostly the same as the json produced by the default cargo check command that rust-analyzer uses. It's missing information about the target crate/path. I'm wondering if you did any work on top of this to get your IDE to use the json output to do syntax/error highlighting?

I'm mostly interested in the error (squiggly line) highlighting. The error json output is isolated already in bazel-out/_tmp/actions/stderr-2. I'm trying to solve this now by writing a script that runs clippy_json and adds the minimum necessary file path information to the json for the error highlighting.

AlexOrozco1256 avatar Jun 24 '25 17:06 AlexOrozco1256

I have been able to setup my rust bazel project to work with vscode's rust-analyzer (not the experimental bazel-rust-analyzer).

I was unable to reproduce the instructions above for capturing the json output with cquery. Apart from that, the json output with --@rules_rust//rust/settings:error_format=json is the raw rustc diagnostics json, which needs to be wrapped in the FromCompiler struct to be understood by IDEs.

I wrote this script to wrap the captured json and feed it back into rust-analyzer by overriding the cargo check command, which is run on each save to produce real-time diagnostics (i.e. error highlighting/squiggly lines and suggestions).

#!/bin/bash
set -o errexit

PREFIX='{"reason":"compiler-message","package_id":"","target":{"kind":[""],"crate_types":[""],"name":"","src_path":"","edition":"2021","doc":true,"doctest":true,"test":true},"message":'
SUFFIX='}'
OUTPUT_BASE="/tmp/bazel-rust-analyzer"
SAVED_FILE="$1"
PATH_PREFIX="$2"
run_analyzer() {
  FILE_PATH="${SAVED_FILE/#"${PATH_PREFIX}/"}"
  FILE_TARGET=$(bazel query "${FILE_PATH}")
  BAZEL_TARGET=$(bazel query "attr('srcs', ${FILE_TARGET}, ${FILE_TARGET//:*/}:*)")
  set +o errexit
  rustfmt "${FILE_PATH}"
  # can substitute bazel -> builders/tools/bazel-debian (slight performance hit)
  bazel --output_base="${OUTPUT_BASE}" build --@rules_rust//rust/settings:error_format=json "${BAZEL_TARGET}"
  RC=$?
  set -o errexit
  if [[ "$RC" -ne 0 ]]; then
    STD_ERR_DIR="${OUTPUT_BASE}/execroot/_main/bazel-out/_tmp/actions"
    while read -r line; do
      echo "${PREFIX}$line${SUFFIX}"
    done <<< "$(cat ${STD_ERR_DIR}/stderr-*)"
  fi
}

run_analyzer

Here I use a bash script to build the main target in my rust project using --@rules_rust//rust/settings:error_format=json. I then read the last stderr file containing the json error output and wrap it with the missing FromCompiler json that vscode needs to understand the diagnostics and display them.
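The wrapping step itself is simple enough to sketch in Python (the envelope fields mirror the PREFIX/SUFFIX placeholders in the bash script above; a real wrapper would fill in the target information):

```python
import json

def wrap_diagnostic(rustc_json_line: str) -> str:
    """Wrap a raw rustc diagnostic in the cargo 'compiler-message'
    envelope that rust-analyzer's check handling expects."""
    envelope = {
        "reason": "compiler-message",
        "package_id": "",  # placeholder, as in the bash script above
        "target": {
            "kind": [""],
            "crate_types": [""],
            "name": "",
            "src_path": "",
            "edition": "2021",
            "doc": True,
            "doctest": True,
            "test": True,
        },
        # The raw rustc diagnostic goes in as-is under "message".
        "message": json.loads(rustc_json_line),
    }
    return json.dumps(envelope)

raw = '{"message": "unused variable: `x`", "level": "warning", "spans": []}'
print(json.loads(wrap_diagnostic(raw))["reason"])
```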

AlexOrozco1256 avatar Jun 30 '25 16:06 AlexOrozco1256

@AlexOrozco1256 ah the "trick" is that I didn't override cargo.buildScripts. Just overriding check is enough to get diagnostics working.

ideally I'd also override cargo.buildScripts; I have this entire slightly questionable CUE script that messes with the output to get it in the right format, like you observed:

command: rust_analyzer_build_scripts: {
	cargoLockRaw: file.Read & {
		filename: "Cargo.lock"
		contents: string
	}
	rustProjectRaw: file.Read & {
		filename: "y.json"
		contents: string
	}
	#cargoLock:   toml.Unmarshal(cargoLockRaw.contents)
	#rustProject: json.Unmarshal(rustProjectRaw.contents)

	print: cli.Print & {
		text: strings.Join([for pkg in #cargoLock.package
			let matchingCrates = [for crate in #rustProject.project.crates
				if pkg.name == crate.env.CARGO_PKG_NAME && pkg.version == crate.env.CARGO_PKG_VERSION {
					crate
				}]
			if len(matchingCrates) > 0
			let crate = matchingCrates[0]
			if crate.is_proc_macro {
				json.Marshal({
					reason:        "compiler-artifact"
					package_id:    "\(pkg.source)#\(pkg.name)@\(pkg.version)"
					manifest_path: "\(crate.env.CARGO_MANIFEST_DIR)/Cargo.toml"
					target: {
						kind: ["proc-macro"]
						crate_types: ["proc-macro"]
						name:     pkg.name
						src_path: crate.root_module
						edition:  crate.edition
					}
					profile: {
						opt_level:        "0"
						debug_assertions: true
						overflow_checks:  true
						test:             true
					}
					features: []
					filenames: [crate.proc_macro_dylib_path]
					executable: null
					fresh:      true
				})
			}], "\n")
	}
}

where y.json held the output of bazel run @rules_rust//tools/rust_analyzer:discover_bazel_rust_project (I hadn't gotten around to actually writing this all up yet).

but...then I hit a really annoying issue: the proc-macro compiled libraries need to have an ABI version that matches the rust-analyzer-proc-macro-srv that rust-analyzer is using. or, a bit less convoluted:

  • rust toolchains include a binary called rust-analyzer-proc-macro-srv, whose job is to load the built proc macros for rust-analyzer to use.
  • in order for this to work, the ABI version it has needs to match the ABI version of the built proc macros.
  • this in turn is only guaranteed if you use the proc-macro-srv from the toolchain that you built the proc macros with.
  • rust-analyzer will locate this file as relative to the configured sysroot.

This is technically fixable by just explicitly setting the sysroot to point to bazel's copy, or sticking bazel's rustc first in the $PATH. At this point though I got weary of the build system engineering, and then I forgot about those details, oops. though this is all pseudo-nerd-sniping me into playing with it again...
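For reference, the sysroot route would look something like this in VSCode settings (a sketch; the path is a placeholder for wherever Bazel materialises the Rust toolchain, and `rust-analyzer.cargo.sysroot` is the relevant setting):

```json
{
  "rust-analyzer.cargo.sysroot": "/path/to/bazel/rust/toolchain/sysroot"
}
```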

refi64 avatar Jul 07 '25 01:07 refi64

so it turns out my entire digression above was completely unrelated: I had misread the exact message format, and the whole reason proc macros were broken for me was because I had accidentally left behind a linkedProjects setting from when I was using a plain rust-project.json, and that was causing the cargo-based project to silently overwrite the discovered project. oops!

Apart from this, the json being outputted with --@rules_rust//rust/settings:error_format=json is the rustc diagnostics json which needs to be wrapped by FromCompiler to be understood by IDEs.

this should be entirely fine as-is. rust-analyzer has explicitly supported it for years. I'd be curious to know what your actual editor setup is like.

refi64 avatar Jul 25 '25 04:07 refi64

Yes rust-analyzer supports providing your own diagnostic json. However the output that bazel produces now isn't recognizable by vscode.

When using rust-analyzer with Cargo, the default check command in vscode to get real-time highlighting/diagnostics is cargo check --message-format=json

This output is wrapped in the FromCompiler struct I linked above. Bazel doesn't wrap the json in the FromCompiler struct, which seems to be why vscode doesn't recognize the output.

The script I added simply wraps the bazel output from bazel build ... --@rules_rust//rust/settings:error_format=json with FromCompiler and this helps me get error/syntax highlighting in vscode just like with the cargo projects I work with.

AlexOrozco1256 avatar Jul 26 '25 00:07 AlexOrozco1256

See below for the command I'm overriding in my settings.json for vscode:

{
    "rust-analyzer.check.overrideCommand": [
            "${workspaceFolder}/bazel/rust-analyzer.sh",
            "$saved_file",
            "${workspaceFolder}"
        ]
}

AlexOrozco1256 avatar Jul 26 '25 00:07 AlexOrozco1256