
Blueprint from command line

Open darsor opened this issue 3 months ago • 10 comments

Is your feature request related to a problem? Please describe. It looks like using orbit build and orbit test as entry points to building/testing might be too limiting to integrate with our existing tooling. However, it would be very useful to go the other direction, where the existing tooling calls into orbit to get the necessary information (e.g., locations of dependencies, via a blueprint).

Describe the solution you'd like An option for the orbit build and maybe orbit test subcommands to output a blueprint to stdout.

Describe alternatives you've considered This is probably possible with a custom target, but built-in support would be better.

darsor avatar Oct 19 '25 14:10 darsor

Orbit has a couple of commands dedicated to printing information to stdout for users to ingest in downstream scripts/processes. One of those is the orbit tree command, which may be what you are interested in. With the --json flag, you get valid JSON on stdout describing a project's design unit hierarchy. See JSON Output in The Orbit Book for more information. By also passing the --edges project option to orbit tree, you can get the hierarchical structure at the project level. Let me know if this command works for you or needs extension. I would also be curious to hear more about the limitations of the build and test entry points and how we might improve them.
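For reference, here is a minimal sketch of consuming that JSON downstream. The "name"/"sources" field names follow the sample output shown elsewhere in this thread and may change between Orbit versions:

```python
import json

# Parse `orbit tree --edges project --json` output into a mapping of
# project name -> direct dependencies. The "name" and "sources" fields
# match the sample output shown in this thread; treat them as
# version-dependent.
def parse_project_tree(json_text):
    nodes = json.loads(json_text)
    return {node["name"]: node["sources"] for node in nodes}

# Sample output captured later in this thread:
sample = (
    '[{"name":"axi_stream_fifo","targets":[],"sources":["axi_stream_pkg"]},'
    '{"name":"axi_stream_pkg","targets":["axi_stream_fifo"],"sources":[]}]'
)
deps = parse_project_tree(sample)
```

In practice you would obtain the JSON by running the command with subprocess.run and reading its stdout.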

chaseruskin avatar Oct 26 '25 22:10 chaseruskin

Thanks for your response. I'm trying this out now on a real project and running into a few issues, even with orbit tree --json.

Here are a few example limitations of orbit test:

  • Let's say I want to run all the tests for a project to make sure I didn't break anything (possibly in a CI environment). I would have to manually list not only every device to test, but every testbench I wanted to test them with.
  • At $WORK we typically use cocotb to run tests. Now the concept of a "testbench" to run against is different, since the testbench is a Python file, not an HDL entity managed by Orbit. It doesn't really seem to be compatible.

Now here's where Orbit could really shine. We use pytest to run all the cocotb testbenches. This is great because it gives us a simple, powerful front-end to specify and run any subset of tests. The annoying bit is that for each test we have to manually define the list of HDL files to include for the test. Ideally we could:

  1. Call pytest
  2. Pytest calls the python test function
  3. The python test function uses Orbit to look up all the dependencies for the DUT and their local paths, to compile and run the simulation. Notably, this includes "remote" dependencies managed by Orbit.

I'm not seeing any way to achieve that right now. I don't even see a way to ask Orbit where the local copy of a remote dependency can be found (for simulation + synthesis) without going through the build/test commands. It seems like essentially what I want is a blueprint for the DUT that lists all its dependencies, hence the original request.
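To make step 3 a bit more concrete, here is a rough sketch of what I imagine the test function doing with the node list from orbit tree --edges project --json; the field names are assumptions based on the current JSON output:

```python
# Rough sketch of step 3: given the node list from
# `orbit tree --edges project --json`, walk the "sources" edges to
# collect every transitive dependency of the DUT's project. The
# "name"/"sources" field names are assumptions based on the current
# JSON output.
def transitive_deps(nodes, root):
    sources = {n["name"]: n["sources"] for n in nodes}
    seen, stack = set(), list(sources.get(root, []))
    while stack:
        dep = stack.pop()
        if dep not in seen:
            seen.add(dep)
            stack.extend(sources.get(dep, []))
    return seen
```

What's still missing is the mapping from each dependency name to its local path on disk, which is the part I can't find a command for.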

darsor avatar Nov 12 '25 23:11 darsor

I'll add a minor note that the --json tree is missing version information:

$ orbit tree --edges project
axi_stream_fifo:0.1.0
└── axi_stream_pkg:0.1.5
$ orbit tree --edges project --json
[{"name":"axi_stream_fifo","targets":[],"sources":["axi_stream_pkg"]},{"name":"axi_stream_pkg","targets":["axi_stream_fifo"],"sources":[]}]

darsor avatar Nov 12 '25 23:11 darsor

Thanks for your response. I'm trying this out now on a real project and running into a few issues, even with orbit tree --json.

Here are a few example limitations of orbit test:

  • Let's say I want to run all the tests for a project to make sure I didn't break anything (possibly in a CI environment). I would have to manually list not only every device to test, but every testbench I wanted to test them with.

You could use the --all option for orbit test (providing neither --dut nor --tb), and then have your backend target script detect this and internally call orbit tree --json to identify all top levels and run a simulation for each. You could also make a custom fileset that stores the list of generics/units you wish to regression test and have your backend script load it when using orbit test --all.
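As a sketch of that custom-fileset idea: the file format is entirely yours to define, since Orbit only collects the file into the blueprint. A hypothetical one-line-per-run format could be parsed like so:

```python
# Hypothetical regression-list loader. Assumes a made-up line format
# "unit_name KEY=VAL KEY=VAL ..." that your own backend script defines;
# Orbit itself only collects the file into the blueprint via a fileset.
def load_regressions(path):
    runs = []
    with open(path) as f:
        for line in f:
            line = line.strip()
            if not line or line.startswith("#"):
                continue  # skip blanks and comments
            unit, *pairs = line.split()
            generics = dict(p.split("=", 1) for p in pairs)
            runs.append((unit, generics))
    return runs
```

Your backend would then loop over the returned (unit, generics) pairs and launch one simulation per entry.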

  • At $WORK we typically use cocotb to run tests. Now the concept of a "testbench" to run against is different, since the testbench is a Python file, not an HDL entity managed by Orbit. It doesn't really seem to be compatible.

I use cocotb for testing as well and have yet to run into any issues (though there very well may be some). The key here is that orbit test can run without an identified "HDL testbench"; just supply the --dut value. For example, the Glyph project uses a custom target that combines Ninja (a build system like Make), GHDL, cocotb, and a custom verification library called Verb that relies on cocotb. The configuration for this target looks like so (stored in a config.toml tied to Verb):

[[target]]
name = "goku"
command = ["python3", "goku.py"]
description = "Run simulations using cocotb + verb with GHDL"
fileset.cocotb_py = "{{ orbit.dut.name }}_tb.py"
plans = ["json"]
build = false
test = true

The Verb project contains Python scripts that show how to use the custom COCOTB-PY files to set the DUT's Python testbench and all other cocotb-related settings. This target builds upon another general-purpose target script repository called Aquila, which I encourage you to check out for inspiration on how to write backend target scripts.

Calling a test in my cocotb workflow then looks like the following. I also have a hamming_enc_tb.py file with the test cases for this DUT, which is automatically collected into the blueprint under the COCOTB-PY fileset and then processed by the target:

orbit test --target goku --dut hamming_enc -- -r sim -g K=4

Now here's where Orbit could really shine. We use pytest to run all the cocotb testbenches. This is great because it gives us a simple, powerful front-end to specify and run any subset of tests. The annoying bit is that for each test we have to manually define the list of HDL files to include for the test. Ideally we could:

  1. Call pytest
  2. Pytest calls the python test function
  3. The python test function uses Orbit to look up all the dependencies for the DUT and their local paths, to compile and run the simulation. Notably, this includes "remote" dependencies managed by Orbit.

I'm not seeing any way to achieve that right now. I don't even see a way to ask Orbit where the local copy of a remote dependency can be found (for simulation + synthesis) without going through the build/test commands. It seems like essentially what I want is a blueprint for the DUT that lists all its dependencies, hence the original request.

Orbit also sets environment variables before invoking a target script during the build/test process, which may be helpful here. I believe that with the examples listed above, you should still be able to achieve your goal by writing a backend target that incorporates cocotb and pytest.

chaseruskin avatar Nov 17 '25 04:11 chaseruskin

I'll add a minor note that the --json tree is missing version information:

$ orbit tree --edges project
axi_stream_fifo:0.1.0
└── axi_stream_pkg:0.1.5
$ orbit tree --edges project --json
[{"name":"axi_stream_fifo","targets":[],"sources":["axi_stream_pkg"]},{"name":"axi_stream_pkg","targets":["axi_stream_fifo"],"sources":[]}]

And yes, good catch! I think the tree JSON output could use more information packed into each node, such as the version you have pointed out. In a coming release, more information will be added to project-level nodes when using the --json flag for orbit tree.
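In the meantime, one stop-gap (a workaround suggestion, not an official feature) is to scrape the versions from the plain-text tree, which already prints name:version pairs:

```python
import re

# Stop-gap: pull name/version pairs out of the plain-text
# `orbit tree --edges project` output shown above, until versions are
# included in the JSON output.
def versions_from_tree_text(text):
    return dict(re.findall(r"(\w+):(\d+\.\d+\.\d+)", text))

# The text output captured above:
sample = "axi_stream_fifo:0.1.0\n└── axi_stream_pkg:0.1.5\n"
```

This obviously breaks if the tree's text layout changes, so treat it as temporary glue.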

chaseruskin avatar Nov 17 '25 04:11 chaseruskin

You could use the --all option for orbit test (providing neither --dut nor --tb), and then have your backend target script detect this and internally call orbit tree --json to identify all top levels and run a simulation for each.

I'll give this a try, though I do want to point out the difference in mindset for these different approaches.

What I'm personally looking for in Orbit is a great HDL package manager, something that does not yet exist. I'm not opposed to using the package manager as a build/test front-end as well, but in my opinion it has to be a very clean and tight integration in order to be useful (think cargo run/test). I have a hard time seeing that being the case for HDLs, simply because they were not designed that way and there's no standardized way of doing those things as there is in a modern language like Rust.

The flexibility of Orbit is a valiant effort to make it work despite those limitations, and I sincerely hope it succeeds.

However, I think first and foremost Orbit should strive to be a great package manager, and that means integrating easily with other build/test front-ends and not expecting every user to use Orbit for those tasks.

I, for example, am very happy with the pytest test discovery, runner, etc., and I think that it will be hard to beat. For building I really only have one or two targets and a Makefile is totally sufficient for this. But if there were an easy way to expose the information Orbit already has about my project and its dependencies to these tools, to facilitate dependency resolution, file discovery, compile order, etc., that would be awesome.

darsor avatar Nov 17 '25 06:11 darsor

and that means integrating easily with other build/test front-ends and not expecting every user to use Orbit for those tasks.

This is especially true for first-time users. It's hard to justify throwing out a working build/test system to replace it with something entirely new. But if Orbit can be easily integrated with and provide value to existing tooling, then you get your foot in the door, so to speak. Then maybe down the road it makes sense to port more things into the Orbit front-end.

darsor avatar Nov 17 '25 06:11 darsor

and that means integrating easily with other build/test front-ends and not expecting every user to use Orbit for those tasks.

This is especially true for first-time users. It's hard to justify throwing out a working build/test system to replace it with something entirely new. But if Orbit can be easily integrated with and provide value to existing tooling, then you get your foot in the door, so to speak. Then maybe down the road it makes sense to port more things into the Orbit front-end.

You do not have to throw out your working build/test system and replace it with something entirely new. The build/test interface provided by Orbit is designed to support whatever existing system you are using. Under the hood, Orbit just calls whatever command you configured for that target once it has finished the planning stage (generating the blueprint and setting environment variables). However, if you use Orbit, it is inevitable that you will have to make modifications to the "front" of your existing system (some form of glue logic) to accept the data Orbit provides (environment variables, the blueprint file). This is true whether that information is printed to stdout, lives in a file, or is set in environment variables.
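As an illustration of that glue logic, a target script might start by locating the blueprint Orbit prepared. Note the environment variable name below is a placeholder I made up for the sketch; check the Orbit documentation for the names your version actually exports:

```python
import os

# Placeholder sketch of a target script locating the blueprint that
# Orbit prepared before invoking it. ORBIT_BLUEPRINT is a hypothetical
# variable name used only for illustration.
def blueprint_path():
    path = os.environ.get("ORBIT_BLUEPRINT")
    if path is None:
        raise RuntimeError("run this through `orbit build`/`orbit test`")
    return path
```

From there, the rest of the script parses the blueprint and hands the file list to whatever build system you already use.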

If you need any help trying to load the data generated by Orbit into your existing build system, feel free to reach out with further questions and/or feedback.

chaseruskin avatar Nov 18 '25 01:11 chaseruskin

What I'm personally looking for in Orbit is a great HDL package manager, something that does not yet exist. I'm not opposed to using the package manager as a build/test front-end as well, but in my opinion it has to be a very clean and tight integration in order to be useful (think cargo run/test). I have a hard time seeing that being the case for HDLs, simply because they were not designed that way and there's no standardized way of doing those things as there is in a modern language like Rust.

The flexibility of Orbit is a valiant effort to make it work despite those limitations, and I sincerely hope it succeeds.

The point you mention about there being no standardized way of doing things is the reason Orbit has no default backend and no default bundled target; essentially, it has no build system. If you call build/test with no targets you have personally configured, it does... nothing. You must bring your build system and configure it as a target within the context of Orbit to get any meaningful output (i.e., whatever output you desire from your build system). The build/test commands are glorified wrappers around your build system that really just set up the process with the inputs your build system may need (the blueprint, environment variables) while providing a consistent interface. There are many EDA tools, with many differing versions, that may or may not work on different platforms, for different devices, etc., which is why the backend is entirely open and exposed to the developer to integrate their own workflows, so that it works exactly how they want.

I, for example, am very happy with the pytest test discovery, runner, etc., and I think that it will be hard to beat. For building I really only have one or two targets and a Makefile is totally sufficient for this. But if there were an easy way to expose the information Orbit already has about my project and its dependencies to these tools, to facilitate dependency resolution, file discovery, compile order, etc., that would be awesome.

I feel confident that you can integrate your build system as a target and continue to achieve everything you currently are doing; after all, Orbit just runs whatever command/script you configured it to run when calling build/test for that target. My argument for the easy way to get the data from Orbit into your build system is through the respective calls to build/test, because of how flexible they are. Of course I'm sure there are flaws, but I encourage you to try integrating your existing build system into Orbit as one or more targets and to reach out with any further questions about this approach.

In my opinion, Orbit is just a package manager, not a build system, but over the years of its development some have argued it is a build system, which I suppose it may be, because it completes what I would call "one phase" of the build process (preparing/collecting all inputs). The way I see package managers and build systems interacting is like so:

  1. users interface with a package manager to request operations on their codebase
  2. package managers take the operation and do all the heavy lifting of collecting source data and providing it as input to a build system
  3. the build system takes the source data as input and produces your final desired output product

Orbit is the solution to step 2, hence it does not include step 3. However, it provides a consistent interface so you can plug in your own build systems and exchange the data from step 2 to step 3. This is also the case with Cargo in Rust: users never call rustc directly; they call cargo build or cargo test, and Cargo handles the calls to the Rust compiler (their build tool) under the hood. Orbit operates the same way as Cargo, except under the key observation you've mentioned: we don't use one single build tool; we have many that are wildly different in how you interact with them. Orbit's flexibility in configuring whatever build tools you want is a feature specifically needed in this community, compared to an ecosystem like Rust's.

chaseruskin avatar Nov 18 '25 02:11 chaseruskin

Thanks for your response. I definitely agree with the need for flexibility. I wonder if there would be a worthwhile way to encourage publishing/sharing build/test configurations in the future. But that's getting off topic.

I'll spend some more time getting this integrated with our existing tooling and let you know what the result is and where the pain points are.

darsor avatar Nov 18 '25 05:11 darsor